Feb 24 09:54:54 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 24 09:54:54 crc restorecon[4736]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 24 09:54:54 crc restorecon[4736]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 24 09:54:54 crc restorecon[4736]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc 
restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:54:54 crc restorecon[4736]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:54:54 crc restorecon[4736]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:54:54 crc restorecon[4736]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:54:54 crc 
restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 
09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:54 crc restorecon[4736]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 
09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:54:54 crc restorecon[4736]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:54:54 crc 
restorecon[4736]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:54 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:54 crc 
restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:54 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc 
restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:54:55 crc restorecon[4736]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 
crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc 
restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:54:55 crc restorecon[4736]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc 
restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc 
restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc 
restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:54:55 crc 
restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:54:55 crc restorecon[4736]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:54:55 crc restorecon[4736]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:54:55 crc restorecon[4736]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 24 09:54:56 crc kubenswrapper[4755]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 24 09:54:56 crc kubenswrapper[4755]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 24 09:54:56 crc kubenswrapper[4755]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 24 09:54:56 crc kubenswrapper[4755]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 24 09:54:56 crc kubenswrapper[4755]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 24 09:54:56 crc kubenswrapper[4755]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.023797 4755 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033060 4755 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033111 4755 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033124 4755 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033133 4755 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033143 4755 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033153 4755 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033161 4755 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033169 4755 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033177 4755 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 24 09:54:56 crc 
kubenswrapper[4755]: W0224 09:54:56.033185 4755 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033203 4755 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033211 4755 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033219 4755 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033226 4755 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033234 4755 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033243 4755 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033250 4755 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033259 4755 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033266 4755 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033274 4755 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033283 4755 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033291 4755 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033299 4755 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033307 4755 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033314 4755 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033322 4755 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033330 4755 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033338 4755 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033345 4755 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033353 4755 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033361 4755 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033371 4755 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033379 4755 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033388 4755 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033396 4755 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033405 4755 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033414 4755 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033422 4755 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033429 4755 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033437 4755 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033445 4755 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033453 4755 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033462 4755 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033469 4755 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033477 4755 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033485 4755 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033493 4755 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033500 4755 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033508 4755 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033516 4755 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033523 4755 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033536 4755 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033546 4755 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033557 4755 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033566 4755 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033577 4755 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033586 4755 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033595 4755 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033604 4755 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033615 4755 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033626 4755 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033636 4755 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033645 4755 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033653 4755 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033662 4755 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033698 4755 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033708 4755 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033717 4755 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033725 4755 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033733 4755 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.033742 4755 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.033923 4755 flags.go:64] FLAG: --address="0.0.0.0"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.033941 4755 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.033957 4755 flags.go:64] FLAG: --anonymous-auth="true"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.033968 4755 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.033979 4755 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.033989 4755 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034000 4755 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034012 4755 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034022 4755 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034031 4755 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034041 4755 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034050 4755 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034061 4755 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034103 4755 flags.go:64] FLAG: --cgroup-root=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034112 4755 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034121 4755 flags.go:64] FLAG: --client-ca-file=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034130 4755 flags.go:64] FLAG: --cloud-config=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034139 4755 flags.go:64] FLAG: --cloud-provider=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034148 4755 flags.go:64] FLAG: --cluster-dns="[]"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034158 4755 flags.go:64] FLAG: --cluster-domain=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034167 4755 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034176 4755 flags.go:64] FLAG: --config-dir=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034184 4755 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034194 4755 flags.go:64] FLAG: --container-log-max-files="5"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034206 4755 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034215 4755 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034224 4755 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034234 4755 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034243 4755 flags.go:64] FLAG: --contention-profiling="false"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034252 4755 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034261 4755 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034271 4755 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034280 4755 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034291 4755 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034314 4755 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034323 4755 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034332 4755 flags.go:64] FLAG: --enable-load-reader="false"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034341 4755 flags.go:64] FLAG: --enable-server="true"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034350 4755 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034362 4755 flags.go:64] FLAG: --event-burst="100"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034372 4755 flags.go:64] FLAG: --event-qps="50"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034381 4755 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034390 4755 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034400 4755 flags.go:64] FLAG: --eviction-hard=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034410 4755 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034420 4755 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034428 4755 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034438 4755 flags.go:64] FLAG: --eviction-soft=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034447 4755 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034456 4755 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034465 4755 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034474 4755 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034483 4755 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034492 4755 flags.go:64] FLAG: --fail-swap-on="true"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034501 4755 flags.go:64] FLAG: --feature-gates=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034511 4755 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034520 4755 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034531 4755 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034540 4755 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034551 4755 flags.go:64] FLAG: --healthz-port="10248"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034560 4755 flags.go:64] FLAG: --help="false"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034569 4755 flags.go:64] FLAG: --hostname-override=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034579 4755 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034588 4755 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034597 4755 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034606 4755 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034615 4755 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034624 4755 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034633 4755 flags.go:64] FLAG: --image-service-endpoint=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034641 4755 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034651 4755 flags.go:64] FLAG: --kube-api-burst="100"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034660 4755 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034669 4755 flags.go:64] FLAG: --kube-api-qps="50"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034678 4755 flags.go:64] FLAG: --kube-reserved=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034693 4755 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034702 4755 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034711 4755 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034720 4755 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034729 4755 flags.go:64] FLAG: --lock-file=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034739 4755 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034748 4755 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034758 4755 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034771 4755 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034780 4755 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034789 4755 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034798 4755 flags.go:64] FLAG: --logging-format="text"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034807 4755 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034817 4755 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034825 4755 flags.go:64] FLAG: --manifest-url=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034834 4755 flags.go:64] FLAG: --manifest-url-header=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034845 4755 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034854 4755 flags.go:64] FLAG: --max-open-files="1000000"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034866 4755 flags.go:64] FLAG: --max-pods="110"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034875 4755 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034884 4755 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034893 4755 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034902 4755 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034911 4755 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034920 4755 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034929 4755 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034947 4755 flags.go:64] FLAG: --node-status-max-images="50"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034957 4755 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034966 4755 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034975 4755 flags.go:64] FLAG: --pod-cidr=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034984 4755 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.034998 4755 flags.go:64] FLAG: --pod-manifest-path=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035010 4755 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035019 4755 flags.go:64] FLAG: --pods-per-core="0"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035028 4755 flags.go:64] FLAG: --port="10250"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035037 4755 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035046 4755 flags.go:64] FLAG: --provider-id=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035056 4755 flags.go:64] FLAG: --qos-reserved=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035098 4755 flags.go:64] FLAG: --read-only-port="10255"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035108 4755 flags.go:64] FLAG: --register-node="true"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035116 4755 flags.go:64] FLAG: --register-schedulable="true"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035126 4755 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035141 4755 flags.go:64] FLAG: --registry-burst="10"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035149 4755 flags.go:64] FLAG: --registry-qps="5"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035158 4755 flags.go:64] FLAG: --reserved-cpus=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035167 4755 flags.go:64] FLAG: --reserved-memory=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035177 4755 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035187 4755 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035196 4755 flags.go:64] FLAG: --rotate-certificates="false"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035204 4755 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035213 4755 flags.go:64] FLAG: --runonce="false"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035222 4755 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035231 4755 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035240 4755 flags.go:64] FLAG: --seccomp-default="false"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035249 4755 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035258 4755 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035267 4755 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035276 4755 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035285 4755 flags.go:64] FLAG: --storage-driver-password="root"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035294 4755 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035303 4755 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035312 4755 flags.go:64] FLAG: --storage-driver-user="root"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035321 4755 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035330 4755 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035343 4755 flags.go:64] FLAG: --system-cgroups=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035351 4755 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035365 4755 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035374 4755 flags.go:64] FLAG: --tls-cert-file=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035389 4755 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035399 4755 flags.go:64] FLAG: --tls-min-version=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035408 4755 flags.go:64] FLAG: --tls-private-key-file=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035417 4755 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035426 4755 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035435 4755 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035444 4755 flags.go:64] FLAG: --v="2"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035455 4755 flags.go:64] FLAG: --version="false"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035466 4755 flags.go:64] FLAG: --vmodule=""
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035478 4755 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.035489 4755 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035717 4755 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035728 4755 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035738 4755 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035746 4755 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035755 4755 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035763 4755 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035771 4755 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035779 4755 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035787 4755 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035795 4755 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035803 4755 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035811 4755 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035822 4755 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035831 4755 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035840 4755 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035850 4755 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035859 4755 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035870 4755 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035878 4755 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035886 4755 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035894 4755 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035904 4755 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035914 4755 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035923 4755 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035931 4755 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035939 4755 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035947 4755 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035955 4755 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035962 4755 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035970 4755 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035977 4755 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035987 4755 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.035995 4755 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036002 4755 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036016 4755 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036023 4755 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036032 4755 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036040 4755 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036048 4755 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036055 4755 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036088 4755 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036097 4755 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036105 4755 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036113 4755 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036122 4755 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036129 4755 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036137 4755 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036145 4755 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036153 4755 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036161 4755 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036169 4755 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036177 4755 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036185 4755 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036193 4755 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036201 4755 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036209 4755 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036217 4755 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036224 4755 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036232 4755 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036240 4755 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036247 4755 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036255 4755 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036263 4755 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036271 4755 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036279 4755 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036287 4755 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036297 4755 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036306 4755 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036315 4755 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036324 4755 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.036337 4755 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.036360 4755 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.048751 4755 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.048807 4755 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.048972 4755 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.048985 4755 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.048994 4755 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049002 4755 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049012 4755 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049021 4755 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049030 4755 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049038 4755 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049046 4755 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049054 4755 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049062 4755 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049096 4755 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049106 4755 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049115 4755 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049124 4755 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049132 4755 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049143 4755 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049154 4755 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049162 4755 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049171 4755 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049179 4755 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049189 4755 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049198 4755 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049206 4755 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049215 4755 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049224 4755 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049233 4755 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049241 4755 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049250 4755 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049260 4755 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049272 4755 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049281 4755 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049291 4755 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049299 4755 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049309 4755 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049316 4755 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049325 4755 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049335 4755 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049345 4755 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049354 4755 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049362 4755 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049370 4755 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049378 4755 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049386 4755 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049393 4755 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049403 4755 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049413 4755 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049423 4755 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049433 4755 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049442 4755 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049451 4755 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049462 4755 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 
09:54:56.049472 4755 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049481 4755 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049491 4755 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049501 4755 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049511 4755 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049521 4755 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049530 4755 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049542 4755 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049552 4755 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049562 4755 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049573 4755 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049582 4755 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049592 4755 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049602 4755 feature_gate.go:330] unrecognized feature gate: Example Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049610 4755 feature_gate.go:330] unrecognized feature gate: 
OVNObservability Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049618 4755 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049625 4755 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049633 4755 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049643 4755 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.049656 4755 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049910 4755 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049924 4755 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049932 4755 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049942 4755 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049951 4755 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049959 4755 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 
09:54:56.049967 4755 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049977 4755 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049984 4755 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.049992 4755 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050000 4755 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050011 4755 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050024 4755 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050033 4755 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050041 4755 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050051 4755 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050058 4755 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050098 4755 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050107 4755 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050115 4755 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050124 4755 
feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050132 4755 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050139 4755 feature_gate.go:330] unrecognized feature gate: Example Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050147 4755 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050155 4755 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050164 4755 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050173 4755 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050181 4755 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050189 4755 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050197 4755 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050204 4755 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050213 4755 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050222 4755 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050230 4755 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050238 4755 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 24 
09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050246 4755 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050254 4755 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050262 4755 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050270 4755 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050277 4755 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050285 4755 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050293 4755 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050301 4755 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050311 4755 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050320 4755 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050329 4755 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050338 4755 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050347 4755 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050355 4755 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050362 4755 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050370 4755 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050378 4755 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050386 4755 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050395 4755 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050403 4755 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050410 4755 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050418 4755 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050428 4755 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. 
It will be removed in a future release. Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050438 4755 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050447 4755 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050455 4755 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050463 4755 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050472 4755 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050479 4755 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050488 4755 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050495 4755 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050503 4755 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050511 4755 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050518 4755 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050527 4755 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.050535 4755 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.050547 4755 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true 
DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.051734 4755 server.go:940] "Client rotation is on, will bootstrap in background" Feb 24 09:54:56 crc kubenswrapper[4755]: E0224 09:54:56.057615 4755 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.064674 4755 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.065012 4755 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.067495 4755 server.go:997] "Starting client certificate rotation" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.067548 4755 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.067920 4755 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.095263 4755 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.098308 4755 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 24 09:54:56 crc kubenswrapper[4755]: E0224 09:54:56.099363 4755 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.114468 4755 log.go:25] "Validated CRI v1 runtime API" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.158191 4755 log.go:25] "Validated CRI v1 image API" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.164459 4755 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.171839 4755 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-24-09-50-14-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.171885 4755 fs.go:134] 
Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.206052 4755 manager.go:217] Machine: {Timestamp:2026-02-24 09:54:56.202392391 +0000 UTC m=+0.658914984 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:19ae84c6-820e-4b63-8116-0dc0088d14e8 BootID:716e957f-e154-4a81-a173-c5b7419cfbf1 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex 
MacAddress:fa:16:3e:61:ea:38 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:61:ea:38 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:86:d6:f3 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:8a:17:73 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:a1:16:4e Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:fc:7b:a7 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:59:6c:a1 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:72:81:70:8d:3e:af Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:22:cd:1b:3f:bd:80 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 
Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.206569 4755 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.206807 4755 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.207355 4755 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.207745 4755 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.207818 4755 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSR
eserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.208261 4755 topology_manager.go:138] "Creating topology manager with none policy" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.208284 4755 container_manager_linux.go:303] "Creating device plugin manager" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.209210 4755 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.209248 4755 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.210603 4755 state_mem.go:36] "Initialized new in-memory state store" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.210761 4755 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.214752 4755 kubelet.go:418] "Attempting to sync node with API server" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.214792 4755 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.214834 4755 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.214860 4755 kubelet.go:324] "Adding apiserver pod source" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.214882 4755 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 
09:54:56.219619 4755 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.221509 4755 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.223436 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused Feb 24 09:54:56 crc kubenswrapper[4755]: E0224 09:54:56.223543 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.223425 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused Feb 24 09:54:56 crc kubenswrapper[4755]: E0224 09:54:56.223609 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.225771 4755 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 24 09:54:56 
crc kubenswrapper[4755]: I0224 09:54:56.227609 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.227656 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.227672 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.227687 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.227712 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.227726 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.227741 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.227763 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.227783 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.227798 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.227819 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.227833 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.228966 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.229984 4755 server.go:1280] "Started kubelet" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 
09:54:56.230246 4755 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.230264 4755 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.231621 4755 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.231923 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused Feb 24 09:54:56 crc systemd[1]: Started Kubernetes Kubelet. Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.235792 4755 server.go:460] "Adding debug handlers to kubelet server" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.236910 4755 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.236980 4755 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 24 09:54:56 crc kubenswrapper[4755]: E0224 09:54:56.237627 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.237667 4755 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.246782 4755 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.247483 4755 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.249022 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused Feb 24 09:54:56 crc kubenswrapper[4755]: E0224 09:54:56.249241 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:54:56 crc kubenswrapper[4755]: E0224 09:54:56.249481 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="200ms" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.249746 4755 factory.go:55] Registering systemd factory Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.250451 4755 factory.go:221] Registration of the systemd container factory successfully Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.251228 4755 factory.go:153] Registering CRI-O factory Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.251441 4755 factory.go:221] Registration of the crio container factory successfully Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.251724 4755 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.251977 4755 factory.go:103] Registering Raw factory Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.252246 4755 manager.go:1196] Started watching for new ooms in manager Feb 24 09:54:56 crc kubenswrapper[4755]: E0224 09:54:56.248962 4755 
event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.220:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1897261fc1bebdf7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.229924343 +0000 UTC m=+0.686446926,LastTimestamp:2026-02-24 09:54:56.229924343 +0000 UTC m=+0.686446926,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.259955 4755 manager.go:319] Starting recovery of all containers Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.265202 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.265452 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.265665 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.266047 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.266271 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.266469 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.266642 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.266813 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.266979 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.267178 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.267358 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.267585 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.267781 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.267965 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.268187 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.268391 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.268609 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.268816 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.269049 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.269312 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.269503 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.269752 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.269951 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.270178 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.270371 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.276448 4755 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.276544 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.276583 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.276609 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.276634 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.276656 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.276678 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.276697 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.276717 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.276766 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.276787 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.276808 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.276828 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.276854 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.276874 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" 
seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.276896 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.276917 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.276935 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.276956 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.276978 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.277000 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.277020 
4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.277041 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.277060 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.277105 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.277125 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.277145 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.277167 4755 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.278624 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.278667 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.278692 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.278753 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.278777 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.278799 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.278823 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.278847 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.279964 4755 manager.go:324] Recovery completed Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.280686 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.280731 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.280752 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.280772 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.280813 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.280860 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.280897 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.280922 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.280948 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.280974 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.280998 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.281028 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.281052 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.281108 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.281129 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.281158 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.281181 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.281202 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.281223 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.281246 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.281267 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.281288 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.281310 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.281330 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282155 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282238 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282269 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282291 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282321 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282354 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282386 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282414 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282444 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282477 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282507 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282539 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282565 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282590 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282618 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282647 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282675 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282700 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282721 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282741 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282772 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282800 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282826 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282851 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282876 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282899 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282922 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282945 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" 
seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282968 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.282989 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283029 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283049 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283100 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283123 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 24 09:54:56 crc 
kubenswrapper[4755]: I0224 09:54:56.283145 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283164 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283185 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283205 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283225 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283246 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283267 4755 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283288 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283308 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283329 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283363 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283387 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283413 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283440 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283460 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283481 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283501 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283536 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283556 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283577 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283599 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283617 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283638 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283657 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283677 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283698 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283716 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283737 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283756 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283775 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283794 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283813 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283845 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283865 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283885 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283903 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283922 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" 
seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283942 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283964 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.283984 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284003 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284022 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284044 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284101 4755 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284128 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284151 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284180 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284208 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284235 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284257 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284284 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284318 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284345 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284370 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284402 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284430 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284457 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284480 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284500 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284521 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284543 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284568 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284619 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284656 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284684 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284710 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284737 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284762 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" 
seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284790 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284816 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284847 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284875 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284902 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284931 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 
09:54:56.284960 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.284983 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.285004 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.285052 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.285107 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.285130 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.285150 4755 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.285171 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.285190 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.285216 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.285237 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.285257 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.285277 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.285296 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.285314 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.285339 4755 reconstruct.go:97] "Volume reconstruction finished" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.285359 4755 reconciler.go:26] "Reconciler: start to sync state" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.290902 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.292703 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.292769 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.292787 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.295084 4755 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.295107 4755 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 24 09:54:56 crc 
kubenswrapper[4755]: I0224 09:54:56.295127 4755 state_mem.go:36] "Initialized new in-memory state store" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.309717 4755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.312472 4755 policy_none.go:49] "None policy: Start" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.314992 4755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.315039 4755 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.315086 4755 kubelet.go:2335] "Starting kubelet main sync loop" Feb 24 09:54:56 crc kubenswrapper[4755]: E0224 09:54:56.315141 4755 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.316307 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused Feb 24 09:54:56 crc kubenswrapper[4755]: E0224 09:54:56.316405 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.317675 4755 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.317727 4755 state_mem.go:35] "Initializing new in-memory state store" Feb 24 09:54:56 crc 
kubenswrapper[4755]: E0224 09:54:56.346668 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.387373 4755 manager.go:334] "Starting Device Plugin manager" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.388008 4755 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.388033 4755 server.go:79] "Starting device plugin registration server" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.388977 4755 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.389218 4755 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.389909 4755 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.390298 4755 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.390442 4755 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 24 09:54:56 crc kubenswrapper[4755]: E0224 09:54:56.402562 4755 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.415809 4755 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.415954 4755 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.417659 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.417764 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.417783 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.418121 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.418328 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.418414 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.419772 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.419821 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.419870 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.420499 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.420563 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:56 
crc kubenswrapper[4755]: I0224 09:54:56.420583 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.420741 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.421008 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.421131 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.421986 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.422056 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.422118 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.422327 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.422488 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.422557 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.424238 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.424279 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.424295 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.424297 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.424338 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.424361 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.424419 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.424479 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.424543 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.424635 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:56 crc 
kubenswrapper[4755]: I0224 09:54:56.424785 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.424844 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.425971 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.426005 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.426017 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.426191 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.426280 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.426305 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.426331 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.426373 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.427699 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.427744 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.427770 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:56 crc kubenswrapper[4755]: E0224 09:54:56.450366 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="400ms" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.487599 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.487689 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.487742 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.487785 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.487828 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.487872 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.487915 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.487959 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.488002 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.488044 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.488131 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.488172 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.488199 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.488239 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.488279 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.490262 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.492338 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.492408 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.492434 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.492490 4755 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:54:56 crc kubenswrapper[4755]: E0224 09:54:56.493242 4755 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.220:6443: connect: connection refused" 
node="crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.590236 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.590306 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.590355 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.590402 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.590448 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.590484 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.590486 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.590492 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.590571 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.590663 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.590693 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.590737 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.590767 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.590832 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.590838 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.590890 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.590916 4755 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.590957 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.590948 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.591030 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.591044 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.591058 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.591138 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.591172 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.591168 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.591218 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.591225 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.591139 4755 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.591339 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.591388 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.693480 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.695882 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.695933 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.695951 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.695983 4755 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:54:56 crc kubenswrapper[4755]: E0224 09:54:56.696489 4755 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.220:6443: connect: connection refused" node="crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.759403 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.784458 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.811562 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-9aacdf18e0dba20ecc0c4542719779d241bc8f33393c11edefd8d5eb17e50fdf WatchSource:0}: Error finding container 9aacdf18e0dba20ecc0c4542719779d241bc8f33393c11edefd8d5eb17e50fdf: Status 404 returned error can't find the container with id 9aacdf18e0dba20ecc0c4542719779d241bc8f33393c11edefd8d5eb17e50fdf Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.815028 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.841661 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.845394 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-127c2e742fa6df5aa24bef60a559cbd8494f7e91da4e5823bcf312098e83bd19 WatchSource:0}: Error finding container 127c2e742fa6df5aa24bef60a559cbd8494f7e91da4e5823bcf312098e83bd19: Status 404 returned error can't find the container with id 127c2e742fa6df5aa24bef60a559cbd8494f7e91da4e5823bcf312098e83bd19 Feb 24 09:54:56 crc kubenswrapper[4755]: I0224 09:54:56.849974 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:54:56 crc kubenswrapper[4755]: E0224 09:54:56.851230 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="800ms" Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.867306 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-ccf5c523c439cd18cfd0ff4b9c8fad50913d4660ffe0c850b4f72b716a993684 WatchSource:0}: Error finding container ccf5c523c439cd18cfd0ff4b9c8fad50913d4660ffe0c850b4f72b716a993684: Status 404 returned error can't find the container with id ccf5c523c439cd18cfd0ff4b9c8fad50913d4660ffe0c850b4f72b716a993684 Feb 24 09:54:56 crc kubenswrapper[4755]: W0224 09:54:56.870807 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-e73578d4a14ca21a90a3220b332309898d3cd735a7ea0426788bc28064ab2530 WatchSource:0}: Error finding container 
e73578d4a14ca21a90a3220b332309898d3cd735a7ea0426788bc28064ab2530: Status 404 returned error can't find the container with id e73578d4a14ca21a90a3220b332309898d3cd735a7ea0426788bc28064ab2530 Feb 24 09:54:57 crc kubenswrapper[4755]: I0224 09:54:57.096836 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:57 crc kubenswrapper[4755]: I0224 09:54:57.099054 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:57 crc kubenswrapper[4755]: I0224 09:54:57.099100 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:57 crc kubenswrapper[4755]: I0224 09:54:57.099109 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:57 crc kubenswrapper[4755]: I0224 09:54:57.099129 4755 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:54:57 crc kubenswrapper[4755]: E0224 09:54:57.099637 4755 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.220:6443: connect: connection refused" node="crc" Feb 24 09:54:57 crc kubenswrapper[4755]: I0224 09:54:57.232993 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused Feb 24 09:54:57 crc kubenswrapper[4755]: I0224 09:54:57.330681 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e73578d4a14ca21a90a3220b332309898d3cd735a7ea0426788bc28064ab2530"} Feb 24 09:54:57 crc kubenswrapper[4755]: I0224 09:54:57.333039 4755 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ccf5c523c439cd18cfd0ff4b9c8fad50913d4660ffe0c850b4f72b716a993684"} Feb 24 09:54:57 crc kubenswrapper[4755]: I0224 09:54:57.334121 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"127c2e742fa6df5aa24bef60a559cbd8494f7e91da4e5823bcf312098e83bd19"} Feb 24 09:54:57 crc kubenswrapper[4755]: I0224 09:54:57.335052 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"72d1a6be1c5ad6e4dbbe389740afa983c7520294444d44860faf1d99adf6181f"} Feb 24 09:54:57 crc kubenswrapper[4755]: I0224 09:54:57.335960 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9aacdf18e0dba20ecc0c4542719779d241bc8f33393c11edefd8d5eb17e50fdf"} Feb 24 09:54:57 crc kubenswrapper[4755]: E0224 09:54:57.361662 4755 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.220:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1897261fc1bebdf7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.229924343 +0000 UTC m=+0.686446926,LastTimestamp:2026-02-24 09:54:56.229924343 +0000 UTC m=+0.686446926,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:54:57 crc kubenswrapper[4755]: W0224 09:54:57.604623 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused Feb 24 09:54:57 crc kubenswrapper[4755]: E0224 09:54:57.605213 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:54:57 crc kubenswrapper[4755]: E0224 09:54:57.652913 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="1.6s" Feb 24 09:54:57 crc kubenswrapper[4755]: W0224 09:54:57.727427 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused Feb 24 09:54:57 crc kubenswrapper[4755]: E0224 09:54:57.727757 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:54:57 crc kubenswrapper[4755]: W0224 09:54:57.737153 4755 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused Feb 24 09:54:57 crc kubenswrapper[4755]: E0224 09:54:57.737373 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:54:57 crc kubenswrapper[4755]: W0224 09:54:57.787471 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused Feb 24 09:54:57 crc kubenswrapper[4755]: E0224 09:54:57.787551 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:54:57 crc kubenswrapper[4755]: I0224 09:54:57.899887 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:57 crc kubenswrapper[4755]: I0224 09:54:57.902595 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:57 crc kubenswrapper[4755]: I0224 09:54:57.902670 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:57 crc kubenswrapper[4755]: I0224 09:54:57.902698 4755 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 24 09:54:57 crc kubenswrapper[4755]: I0224 09:54:57.902745 4755 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:54:57 crc kubenswrapper[4755]: E0224 09:54:57.903541 4755 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.220:6443: connect: connection refused" node="crc" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.234002 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.279745 4755 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 09:54:58 crc kubenswrapper[4755]: E0224 09:54:58.280865 4755 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.340921 4755 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="8208ba5fe11546cc7dafb93073843511b97889983edac56135c17a41ae318f48" exitCode=0 Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.341027 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"8208ba5fe11546cc7dafb93073843511b97889983edac56135c17a41ae318f48"} Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 
09:54:58.341138 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.342517 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.342587 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.342612 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.344216 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"486ea75c3392dcb5867446a7b70131ed7f1dc0e0dc8ad436679d16b6eb4c3e8a"} Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.344284 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f3660f48bfc0f08822a77d63d8f65807eaf1cf8ef7da4b7db349cbc61e88f732"} Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.344302 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ad286ba8a2e826f23c678d6ebfecc3f21235bc71057ad52d80e3790c2740263f"} Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.344312 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.344319 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"56c6dc1f59bd27418f47c0102400b6607ae9f9baf7011ce5d8c06c5b87387d90"} Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.345470 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.345529 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.345546 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.347692 4755 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472" exitCode=0 Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.347801 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472"} Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.347872 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.349170 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.349217 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.349237 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.350455 4755 generic.go:334] 
"Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c" exitCode=0 Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.350532 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c"} Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.350667 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.350680 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.352191 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.352230 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.352243 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.352269 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.352248 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.352281 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.352970 4755 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" 
containerID="759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4" exitCode=0 Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.353002 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4"} Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.353159 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.354016 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.354090 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:58 crc kubenswrapper[4755]: I0224 09:54:58.354108 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.233033 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused Feb 24 09:54:59 crc kubenswrapper[4755]: E0224 09:54:59.254374 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="3.2s" Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.359544 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e018bfbfc779c88c09f9a0d316c3ed04f815ee4b2b7ec6efe15eba1de075939e"} Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.359615 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.361762 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.361807 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.361821 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.366977 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0"} Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.367013 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9"} Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.367024 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e"} Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.367034 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f"} Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.371662 4755 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b" exitCode=0 Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.371830 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b"} Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.373334 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.375973 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.376002 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.376013 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.376437 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.376581 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.376714 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6c839041bf4a2cd96fe63f169ec201656be8f3343fb5398ecce2c2c0848e9987"} Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.377650 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"eb7da773bd75b944d3f90cf4f68330fccbbfceb6441356b2561b48f2d7d90a4b"} Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.377678 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3e84672e8d77bad12c4a89084f19b7bab0dd4c0caaaa038c165095b2d509910b"} Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.377596 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.377730 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.377745 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.377921 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.377941 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.377953 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.500651 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.504210 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.505567 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.505630 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.505643 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:54:59 crc kubenswrapper[4755]: I0224 09:54:59.505677 4755 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:54:59 crc kubenswrapper[4755]: E0224 09:54:59.506328 4755 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.220:6443: connect: connection refused" node="crc" Feb 24 09:55:00 crc kubenswrapper[4755]: I0224 09:55:00.386957 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"820bdc7f9c7a0bf31efd9360f5028b6f7e4a6dd0aeed9b139030da8722f19fef"} Feb 24 09:55:00 crc kubenswrapper[4755]: I0224 09:55:00.387236 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:00 crc kubenswrapper[4755]: I0224 09:55:00.389015 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:00 crc kubenswrapper[4755]: I0224 09:55:00.389062 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:00 crc 
kubenswrapper[4755]: I0224 09:55:00.389116 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:00 crc kubenswrapper[4755]: I0224 09:55:00.393115 4755 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2" exitCode=0 Feb 24 09:55:00 crc kubenswrapper[4755]: I0224 09:55:00.393247 4755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 09:55:00 crc kubenswrapper[4755]: I0224 09:55:00.393290 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:00 crc kubenswrapper[4755]: I0224 09:55:00.393304 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:00 crc kubenswrapper[4755]: I0224 09:55:00.393339 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:00 crc kubenswrapper[4755]: I0224 09:55:00.393394 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:00 crc kubenswrapper[4755]: I0224 09:55:00.394514 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2"} Feb 24 09:55:00 crc kubenswrapper[4755]: I0224 09:55:00.395079 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:00 crc kubenswrapper[4755]: I0224 09:55:00.395109 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:00 crc kubenswrapper[4755]: I0224 09:55:00.395119 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 24 09:55:00 crc kubenswrapper[4755]: I0224 09:55:00.395376 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:00 crc kubenswrapper[4755]: I0224 09:55:00.395424 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:00 crc kubenswrapper[4755]: I0224 09:55:00.395443 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:00 crc kubenswrapper[4755]: I0224 09:55:00.395464 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:00 crc kubenswrapper[4755]: I0224 09:55:00.395501 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:00 crc kubenswrapper[4755]: I0224 09:55:00.395523 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:00 crc kubenswrapper[4755]: I0224 09:55:00.395591 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:00 crc kubenswrapper[4755]: I0224 09:55:00.395648 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:00 crc kubenswrapper[4755]: I0224 09:55:00.395667 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:00 crc kubenswrapper[4755]: I0224 09:55:00.505994 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:55:01 crc kubenswrapper[4755]: I0224 09:55:01.401230 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d"} Feb 24 09:55:01 crc kubenswrapper[4755]: I0224 09:55:01.401296 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6"} Feb 24 09:55:01 crc kubenswrapper[4755]: I0224 09:55:01.401318 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3"} Feb 24 09:55:01 crc kubenswrapper[4755]: I0224 09:55:01.401375 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:01 crc kubenswrapper[4755]: I0224 09:55:01.402953 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:01 crc kubenswrapper[4755]: I0224 09:55:01.403001 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:01 crc kubenswrapper[4755]: I0224 09:55:01.403013 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:02 crc kubenswrapper[4755]: I0224 09:55:02.409795 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a"} Feb 24 09:55:02 crc kubenswrapper[4755]: I0224 09:55:02.409856 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701"} Feb 24 09:55:02 crc kubenswrapper[4755]: I0224 09:55:02.409919 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:02 crc kubenswrapper[4755]: I0224 09:55:02.409957 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:02 crc kubenswrapper[4755]: I0224 09:55:02.410894 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:02 crc kubenswrapper[4755]: I0224 09:55:02.410930 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:02 crc kubenswrapper[4755]: I0224 09:55:02.410942 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:02 crc kubenswrapper[4755]: I0224 09:55:02.411536 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:02 crc kubenswrapper[4755]: I0224 09:55:02.411583 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:02 crc kubenswrapper[4755]: I0224 09:55:02.411602 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:02 crc kubenswrapper[4755]: I0224 09:55:02.500837 4755 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 09:55:02 crc kubenswrapper[4755]: I0224 09:55:02.500927 4755 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 09:55:02 crc kubenswrapper[4755]: I0224 09:55:02.618755 4755 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 09:55:02 crc kubenswrapper[4755]: I0224 09:55:02.706913 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:02 crc kubenswrapper[4755]: I0224 09:55:02.708880 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:02 crc kubenswrapper[4755]: I0224 09:55:02.708936 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:02 crc kubenswrapper[4755]: I0224 09:55:02.708954 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:02 crc kubenswrapper[4755]: I0224 09:55:02.708987 4755 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:55:03 crc kubenswrapper[4755]: I0224 09:55:03.413231 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:03 crc kubenswrapper[4755]: I0224 09:55:03.414700 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:03 crc kubenswrapper[4755]: I0224 09:55:03.414769 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:03 crc kubenswrapper[4755]: I0224 09:55:03.414792 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 
09:55:04 crc kubenswrapper[4755]: I0224 09:55:04.935559 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:55:04 crc kubenswrapper[4755]: I0224 09:55:04.935868 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:04 crc kubenswrapper[4755]: I0224 09:55:04.937517 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:04 crc kubenswrapper[4755]: I0224 09:55:04.937605 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:04 crc kubenswrapper[4755]: I0224 09:55:04.937628 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:05 crc kubenswrapper[4755]: I0224 09:55:05.154273 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:55:05 crc kubenswrapper[4755]: I0224 09:55:05.312457 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:55:05 crc kubenswrapper[4755]: I0224 09:55:05.312710 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:05 crc kubenswrapper[4755]: I0224 09:55:05.314276 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:05 crc kubenswrapper[4755]: I0224 09:55:05.314346 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:05 crc kubenswrapper[4755]: I0224 09:55:05.314373 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:05 crc kubenswrapper[4755]: I0224 09:55:05.385351 4755 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:55:05 crc kubenswrapper[4755]: I0224 09:55:05.418652 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:05 crc kubenswrapper[4755]: I0224 09:55:05.419405 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:05 crc kubenswrapper[4755]: I0224 09:55:05.420140 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:05 crc kubenswrapper[4755]: I0224 09:55:05.420197 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:05 crc kubenswrapper[4755]: I0224 09:55:05.420219 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:05 crc kubenswrapper[4755]: I0224 09:55:05.421170 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:05 crc kubenswrapper[4755]: I0224 09:55:05.421220 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:05 crc kubenswrapper[4755]: I0224 09:55:05.421242 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:06 crc kubenswrapper[4755]: I0224 09:55:06.400276 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:55:06 crc kubenswrapper[4755]: I0224 09:55:06.400489 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:06 crc kubenswrapper[4755]: I0224 09:55:06.402017 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 24 09:55:06 crc kubenswrapper[4755]: I0224 09:55:06.402120 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:06 crc kubenswrapper[4755]: I0224 09:55:06.402143 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:06 crc kubenswrapper[4755]: E0224 09:55:06.403429 4755 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 09:55:06 crc kubenswrapper[4755]: I0224 09:55:06.450841 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:55:06 crc kubenswrapper[4755]: I0224 09:55:06.451145 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:06 crc kubenswrapper[4755]: I0224 09:55:06.452558 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:06 crc kubenswrapper[4755]: I0224 09:55:06.452611 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:06 crc kubenswrapper[4755]: I0224 09:55:06.452632 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:06 crc kubenswrapper[4755]: I0224 09:55:06.459175 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:55:06 crc kubenswrapper[4755]: I0224 09:55:06.543417 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 24 09:55:06 crc kubenswrapper[4755]: I0224 09:55:06.543660 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 
09:55:06 crc kubenswrapper[4755]: I0224 09:55:06.545031 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:06 crc kubenswrapper[4755]: I0224 09:55:06.545143 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:06 crc kubenswrapper[4755]: I0224 09:55:06.545166 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:07 crc kubenswrapper[4755]: I0224 09:55:07.423881 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:07 crc kubenswrapper[4755]: I0224 09:55:07.425239 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:07 crc kubenswrapper[4755]: I0224 09:55:07.425303 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:07 crc kubenswrapper[4755]: I0224 09:55:07.425319 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:07 crc kubenswrapper[4755]: I0224 09:55:07.430529 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:55:08 crc kubenswrapper[4755]: I0224 09:55:08.426901 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:08 crc kubenswrapper[4755]: I0224 09:55:08.428100 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:08 crc kubenswrapper[4755]: I0224 09:55:08.428144 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:08 crc kubenswrapper[4755]: I0224 09:55:08.428162 4755 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:08 crc kubenswrapper[4755]: I0224 09:55:08.659188 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 24 09:55:08 crc kubenswrapper[4755]: I0224 09:55:08.659551 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:08 crc kubenswrapper[4755]: I0224 09:55:08.661896 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:08 crc kubenswrapper[4755]: I0224 09:55:08.661939 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:08 crc kubenswrapper[4755]: I0224 09:55:08.661956 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:10 crc kubenswrapper[4755]: W0224 09:55:10.012699 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 24 09:55:10 crc kubenswrapper[4755]: I0224 09:55:10.012864 4755 trace.go:236] Trace[1771657390]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Feb-2026 09:55:00.010) (total time: 10002ms): Feb 24 09:55:10 crc kubenswrapper[4755]: Trace[1771657390]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (09:55:10.012) Feb 24 09:55:10 crc kubenswrapper[4755]: Trace[1771657390]: [10.002412908s] [10.002412908s] END Feb 24 09:55:10 crc kubenswrapper[4755]: E0224 09:55:10.012900 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 24 09:55:10 crc kubenswrapper[4755]: W0224 09:55:10.214931 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 24 09:55:10 crc kubenswrapper[4755]: I0224 09:55:10.215030 4755 trace.go:236] Trace[1374766888]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Feb-2026 09:55:00.213) (total time: 10001ms): Feb 24 09:55:10 crc kubenswrapper[4755]: Trace[1374766888]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:55:10.214) Feb 24 09:55:10 crc kubenswrapper[4755]: Trace[1374766888]: [10.001607271s] [10.001607271s] END Feb 24 09:55:10 crc kubenswrapper[4755]: E0224 09:55:10.215059 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 24 09:55:10 crc kubenswrapper[4755]: I0224 09:55:10.234098 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 24 09:55:10 crc kubenswrapper[4755]: W0224 09:55:10.360309 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:10Z is after 2026-02-23T05:33:13Z Feb 24 09:55:10 crc kubenswrapper[4755]: E0224 09:55:10.360431 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:10Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 09:55:10 crc kubenswrapper[4755]: E0224 09:55:10.360568 4755 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:10Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 09:55:10 crc kubenswrapper[4755]: W0224 09:55:10.361599 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:10Z is after 2026-02-23T05:33:13Z Feb 24 09:55:10 crc kubenswrapper[4755]: E0224 09:55:10.361681 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:10Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 09:55:10 crc kubenswrapper[4755]: E0224 09:55:10.364774 4755 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:10Z is after 2026-02-23T05:33:13Z" node="crc" Feb 24 09:55:10 crc kubenswrapper[4755]: E0224 09:55:10.369315 4755 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:10Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897261fc1bebdf7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.229924343 +0000 UTC m=+0.686446926,LastTimestamp:2026-02-24 09:54:56.229924343 +0000 UTC m=+0.686446926,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:10 crc kubenswrapper[4755]: I0224 09:55:10.371822 4755 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" 
cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 24 09:55:10 crc kubenswrapper[4755]: I0224 09:55:10.371880 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 24 09:55:10 crc kubenswrapper[4755]: E0224 09:55:10.373874 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:10Z is after 2026-02-23T05:33:13Z" interval="6.4s" Feb 24 09:55:10 crc kubenswrapper[4755]: I0224 09:55:10.377381 4755 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 24 09:55:10 crc kubenswrapper[4755]: I0224 09:55:10.377468 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 24 09:55:10 crc kubenswrapper[4755]: I0224 09:55:10.433744 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 24 09:55:10 crc kubenswrapper[4755]: I0224 09:55:10.436332 4755 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="820bdc7f9c7a0bf31efd9360f5028b6f7e4a6dd0aeed9b139030da8722f19fef" exitCode=255 Feb 24 09:55:10 crc kubenswrapper[4755]: I0224 09:55:10.436377 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"820bdc7f9c7a0bf31efd9360f5028b6f7e4a6dd0aeed9b139030da8722f19fef"} Feb 24 09:55:10 crc kubenswrapper[4755]: I0224 09:55:10.436532 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:10 crc kubenswrapper[4755]: I0224 09:55:10.437410 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:10 crc kubenswrapper[4755]: I0224 09:55:10.437448 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:10 crc kubenswrapper[4755]: I0224 09:55:10.437458 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:10 crc kubenswrapper[4755]: I0224 09:55:10.438051 4755 scope.go:117] "RemoveContainer" containerID="820bdc7f9c7a0bf31efd9360f5028b6f7e4a6dd0aeed9b139030da8722f19fef" Feb 24 09:55:11 crc kubenswrapper[4755]: I0224 09:55:11.237052 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:11Z is after 2026-02-23T05:33:13Z Feb 24 09:55:11 crc kubenswrapper[4755]: I0224 09:55:11.440603 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 24 09:55:11 crc kubenswrapper[4755]: I0224 09:55:11.442389 4755 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"60af07f95b287b6fdda86078dff5864416379a816e01ac47eddaeb6c27ebb2a2"} Feb 24 09:55:11 crc kubenswrapper[4755]: I0224 09:55:11.442553 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:11 crc kubenswrapper[4755]: I0224 09:55:11.443380 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:11 crc kubenswrapper[4755]: I0224 09:55:11.443425 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:11 crc kubenswrapper[4755]: I0224 09:55:11.443443 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:12 crc kubenswrapper[4755]: I0224 09:55:12.237553 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:12Z is after 2026-02-23T05:33:13Z Feb 24 09:55:12 crc kubenswrapper[4755]: I0224 09:55:12.446872 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 24 09:55:12 crc kubenswrapper[4755]: I0224 09:55:12.447655 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 24 09:55:12 crc kubenswrapper[4755]: I0224 09:55:12.449859 4755 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="60af07f95b287b6fdda86078dff5864416379a816e01ac47eddaeb6c27ebb2a2" exitCode=255 Feb 24 09:55:12 crc kubenswrapper[4755]: I0224 09:55:12.449897 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"60af07f95b287b6fdda86078dff5864416379a816e01ac47eddaeb6c27ebb2a2"} Feb 24 09:55:12 crc kubenswrapper[4755]: I0224 09:55:12.449950 4755 scope.go:117] "RemoveContainer" containerID="820bdc7f9c7a0bf31efd9360f5028b6f7e4a6dd0aeed9b139030da8722f19fef" Feb 24 09:55:12 crc kubenswrapper[4755]: I0224 09:55:12.450170 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:12 crc kubenswrapper[4755]: I0224 09:55:12.451415 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:12 crc kubenswrapper[4755]: I0224 09:55:12.451448 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:12 crc kubenswrapper[4755]: I0224 09:55:12.451458 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:12 crc kubenswrapper[4755]: I0224 09:55:12.451917 4755 scope.go:117] "RemoveContainer" containerID="60af07f95b287b6fdda86078dff5864416379a816e01ac47eddaeb6c27ebb2a2" Feb 24 09:55:12 crc kubenswrapper[4755]: E0224 09:55:12.452136 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:55:12 crc kubenswrapper[4755]: I0224 09:55:12.502268 4755 patch_prober.go:28] 
interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 09:55:12 crc kubenswrapper[4755]: I0224 09:55:12.502403 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 09:55:13 crc kubenswrapper[4755]: I0224 09:55:13.235773 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:13Z is after 2026-02-23T05:33:13Z Feb 24 09:55:13 crc kubenswrapper[4755]: I0224 09:55:13.455152 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 24 09:55:13 crc kubenswrapper[4755]: I0224 09:55:13.907221 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:55:13 crc kubenswrapper[4755]: I0224 09:55:13.907495 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:13 crc kubenswrapper[4755]: I0224 09:55:13.909056 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:13 crc kubenswrapper[4755]: I0224 
09:55:13.909139 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:13 crc kubenswrapper[4755]: I0224 09:55:13.909159 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:13 crc kubenswrapper[4755]: I0224 09:55:13.909960 4755 scope.go:117] "RemoveContainer" containerID="60af07f95b287b6fdda86078dff5864416379a816e01ac47eddaeb6c27ebb2a2" Feb 24 09:55:13 crc kubenswrapper[4755]: E0224 09:55:13.910265 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:55:14 crc kubenswrapper[4755]: I0224 09:55:14.237606 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:14Z is after 2026-02-23T05:33:13Z Feb 24 09:55:14 crc kubenswrapper[4755]: W0224 09:55:14.774760 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:14Z is after 2026-02-23T05:33:13Z Feb 24 09:55:14 crc kubenswrapper[4755]: E0224 09:55:14.774867 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:14Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 09:55:14 crc kubenswrapper[4755]: I0224 09:55:14.943911 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:55:14 crc kubenswrapper[4755]: I0224 09:55:14.944231 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:14 crc kubenswrapper[4755]: I0224 09:55:14.945964 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:14 crc kubenswrapper[4755]: I0224 09:55:14.946042 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:14 crc kubenswrapper[4755]: I0224 09:55:14.946111 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:14 crc kubenswrapper[4755]: I0224 09:55:14.947228 4755 scope.go:117] "RemoveContainer" containerID="60af07f95b287b6fdda86078dff5864416379a816e01ac47eddaeb6c27ebb2a2" Feb 24 09:55:14 crc kubenswrapper[4755]: E0224 09:55:14.947551 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:55:14 crc kubenswrapper[4755]: I0224 09:55:14.952201 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 
09:55:15 crc kubenswrapper[4755]: W0224 09:55:15.077045 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:15Z is after 2026-02-23T05:33:13Z Feb 24 09:55:15 crc kubenswrapper[4755]: E0224 09:55:15.077234 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:15Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 09:55:15 crc kubenswrapper[4755]: I0224 09:55:15.237871 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:15Z is after 2026-02-23T05:33:13Z Feb 24 09:55:15 crc kubenswrapper[4755]: I0224 09:55:15.463806 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:15 crc kubenswrapper[4755]: I0224 09:55:15.465090 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:15 crc kubenswrapper[4755]: I0224 09:55:15.465128 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:15 crc kubenswrapper[4755]: I0224 09:55:15.465147 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 24 09:55:15 crc kubenswrapper[4755]: I0224 09:55:15.465920 4755 scope.go:117] "RemoveContainer" containerID="60af07f95b287b6fdda86078dff5864416379a816e01ac47eddaeb6c27ebb2a2" Feb 24 09:55:15 crc kubenswrapper[4755]: E0224 09:55:15.466204 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:55:15 crc kubenswrapper[4755]: W0224 09:55:15.988339 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:15Z is after 2026-02-23T05:33:13Z Feb 24 09:55:15 crc kubenswrapper[4755]: E0224 09:55:15.988425 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:15Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 09:55:16 crc kubenswrapper[4755]: I0224 09:55:16.237642 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:16Z is after 2026-02-23T05:33:13Z Feb 24 
09:55:16 crc kubenswrapper[4755]: E0224 09:55:16.403602 4755 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 09:55:16 crc kubenswrapper[4755]: W0224 09:55:16.653906 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:16Z is after 2026-02-23T05:33:13Z Feb 24 09:55:16 crc kubenswrapper[4755]: E0224 09:55:16.654026 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:16Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 09:55:16 crc kubenswrapper[4755]: I0224 09:55:16.765226 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:16 crc kubenswrapper[4755]: I0224 09:55:16.766713 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:16 crc kubenswrapper[4755]: I0224 09:55:16.766764 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:16 crc kubenswrapper[4755]: I0224 09:55:16.766784 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:16 crc kubenswrapper[4755]: I0224 09:55:16.766822 4755 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:55:16 crc kubenswrapper[4755]: E0224 09:55:16.771743 4755 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:16Z is after 2026-02-23T05:33:13Z" node="crc" Feb 24 09:55:16 crc kubenswrapper[4755]: E0224 09:55:16.779727 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:16Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 24 09:55:17 crc kubenswrapper[4755]: I0224 09:55:17.237681 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:55:17Z is after 2026-02-23T05:33:13Z Feb 24 09:55:18 crc kubenswrapper[4755]: I0224 09:55:18.237429 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:18 crc kubenswrapper[4755]: I0224 09:55:18.703999 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 24 09:55:18 crc kubenswrapper[4755]: I0224 09:55:18.704313 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:18 crc kubenswrapper[4755]: I0224 09:55:18.706581 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:18 crc kubenswrapper[4755]: I0224 09:55:18.706674 4755 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:18 crc kubenswrapper[4755]: I0224 09:55:18.706692 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:18 crc kubenswrapper[4755]: I0224 09:55:18.728930 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 24 09:55:19 crc kubenswrapper[4755]: I0224 09:55:19.094291 4755 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 09:55:19 crc kubenswrapper[4755]: I0224 09:55:19.111986 4755 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 24 09:55:19 crc kubenswrapper[4755]: I0224 09:55:19.240380 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:19 crc kubenswrapper[4755]: I0224 09:55:19.476016 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:19 crc kubenswrapper[4755]: I0224 09:55:19.477496 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:19 crc kubenswrapper[4755]: I0224 09:55:19.477571 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:19 crc kubenswrapper[4755]: I0224 09:55:19.477595 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:20 crc kubenswrapper[4755]: I0224 09:55:20.239610 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.377259 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fc1bebdf7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.229924343 +0000 UTC m=+0.686446926,LastTimestamp:2026-02-24 09:54:56.229924343 +0000 UTC m=+0.686446926,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.383522 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fc57d4cf4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.292744436 +0000 UTC m=+0.749267009,LastTimestamp:2026-02-24 09:54:56.292744436 +0000 UTC m=+0.749267009,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.389399 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fc57dd977 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.292780407 +0000 UTC m=+0.749302990,LastTimestamp:2026-02-24 09:54:56.292780407 +0000 UTC m=+0.749302990,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.393806 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fc57e1ca7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.292797607 +0000 UTC m=+0.749320190,LastTimestamp:2026-02-24 09:54:56.292797607 +0000 UTC m=+0.749320190,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.399366 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fcbacb568 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.396514664 +0000 UTC m=+0.853037217,LastTimestamp:2026-02-24 09:54:56.396514664 +0000 UTC m=+0.853037217,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.407004 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897261fc57d4cf4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fc57d4cf4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.292744436 +0000 UTC m=+0.749267009,LastTimestamp:2026-02-24 09:54:56.417708104 +0000 UTC m=+0.874230687,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.413836 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897261fc57dd977\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fc57dd977 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.292780407 +0000 UTC m=+0.749302990,LastTimestamp:2026-02-24 09:54:56.417776765 +0000 UTC m=+0.874299348,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.421265 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897261fc57e1ca7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fc57e1ca7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.292797607 +0000 UTC m=+0.749320190,LastTimestamp:2026-02-24 09:54:56.417792925 +0000 UTC m=+0.874315508,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.428034 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897261fc57d4cf4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fc57d4cf4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.292744436 +0000 UTC 
m=+0.749267009,LastTimestamp:2026-02-24 09:54:56.419803455 +0000 UTC m=+0.876326038,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.434878 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897261fc57dd977\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fc57dd977 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.292780407 +0000 UTC m=+0.749302990,LastTimestamp:2026-02-24 09:54:56.419864276 +0000 UTC m=+0.876386849,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.442061 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897261fc57e1ca7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fc57e1ca7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.292797607 +0000 UTC m=+0.749320190,LastTimestamp:2026-02-24 09:54:56.419880527 +0000 UTC m=+0.876403110,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.447508 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897261fc57d4cf4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fc57d4cf4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.292744436 +0000 UTC m=+0.749267009,LastTimestamp:2026-02-24 09:54:56.42053797 +0000 UTC m=+0.877060553,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.454445 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897261fc57dd977\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fc57dd977 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.292780407 +0000 UTC m=+0.749302990,LastTimestamp:2026-02-24 09:54:56.420576431 +0000 UTC m=+0.877099014,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.460893 4755 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897261fc57e1ca7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fc57e1ca7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.292797607 +0000 UTC m=+0.749320190,LastTimestamp:2026-02-24 09:54:56.420593421 +0000 UTC m=+0.877116004,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.467715 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897261fc57d4cf4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fc57d4cf4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.292744436 +0000 UTC m=+0.749267009,LastTimestamp:2026-02-24 09:54:56.422037389 +0000 UTC m=+0.878559972,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.474506 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897261fc57dd977\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" 
in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fc57dd977 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.292780407 +0000 UTC m=+0.749302990,LastTimestamp:2026-02-24 09:54:56.422110341 +0000 UTC m=+0.878632914,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.479972 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897261fc57e1ca7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fc57e1ca7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.292797607 +0000 UTC m=+0.749320190,LastTimestamp:2026-02-24 09:54:56.422128171 +0000 UTC m=+0.878650744,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.486973 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897261fc57d4cf4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fc57d4cf4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.292744436 +0000 UTC m=+0.749267009,LastTimestamp:2026-02-24 09:54:56.424268774 +0000 UTC m=+0.880791357,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.491361 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897261fc57dd977\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fc57dd977 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.292780407 +0000 UTC m=+0.749302990,LastTimestamp:2026-02-24 09:54:56.424289235 +0000 UTC m=+0.880811808,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.494033 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897261fc57e1ca7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fc57e1ca7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc 
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.292797607 +0000 UTC m=+0.749320190,LastTimestamp:2026-02-24 09:54:56.424306195 +0000 UTC m=+0.880828768,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.498716 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897261fc57d4cf4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fc57d4cf4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.292744436 +0000 UTC m=+0.749267009,LastTimestamp:2026-02-24 09:54:56.424325525 +0000 UTC m=+0.880848098,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.500833 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897261fc57dd977\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fc57dd977 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.292780407 +0000 UTC 
m=+0.749302990,LastTimestamp:2026-02-24 09:54:56.424352726 +0000 UTC m=+0.880875309,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.504421 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897261fc57e1ca7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fc57e1ca7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.292797607 +0000 UTC m=+0.749320190,LastTimestamp:2026-02-24 09:54:56.424372926 +0000 UTC m=+0.880895509,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: I0224 09:55:20.506308 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:55:20 crc kubenswrapper[4755]: I0224 09:55:20.506614 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:20 crc kubenswrapper[4755]: I0224 09:55:20.508820 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:20 crc kubenswrapper[4755]: I0224 09:55:20.508879 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:20 crc kubenswrapper[4755]: I0224 09:55:20.508897 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.509217 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897261fc57d4cf4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fc57d4cf4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.292744436 +0000 UTC m=+0.749267009,LastTimestamp:2026-02-24 09:54:56.424454678 +0000 UTC m=+0.880977231,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: I0224 09:55:20.509859 4755 scope.go:117] "RemoveContainer" containerID="60af07f95b287b6fdda86078dff5864416379a816e01ac47eddaeb6c27ebb2a2" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.510205 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.515765 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897261fc57dd977\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897261fc57dd977 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.292780407 +0000 UTC m=+0.749302990,LastTimestamp:2026-02-24 09:54:56.424489969 +0000 UTC m=+0.881012522,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.523628 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897261fe511235c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.822526812 +0000 UTC m=+1.279049385,LastTimestamp:2026-02-24 09:54:56.822526812 +0000 UTC m=+1.279049385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.530015 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897261fe58e1a3a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.830716474 +0000 UTC m=+1.287239057,LastTimestamp:2026-02-24 09:54:56.830716474 +0000 UTC m=+1.287239057,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.536515 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897261fe733042a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.858301482 +0000 UTC m=+1.314824035,LastTimestamp:2026-02-24 09:54:56.858301482 +0000 UTC 
m=+1.314824035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.543391 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897261fe82148f5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.873916661 +0000 UTC m=+1.330439214,LastTimestamp:2026-02-24 09:54:56.873916661 +0000 UTC m=+1.330439214,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.549828 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897261fe8426b14 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:56.876088084 +0000 UTC m=+1.332610637,LastTimestamp:2026-02-24 09:54:56.876088084 +0000 UTC m=+1.332610637,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.558133 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189726200919097a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:57.42702425 +0000 UTC m=+1.883546793,LastTimestamp:2026-02-24 09:54:57.42702425 +0000 UTC m=+1.883546793,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.564975 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189726200927d4dc openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:57.42799382 +0000 UTC m=+1.884516363,LastTimestamp:2026-02-24 09:54:57.42799382 +0000 UTC m=+1.884516363,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.572467 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18972620092ce6f9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:57.428326137 +0000 UTC m=+1.884848700,LastTimestamp:2026-02-24 09:54:57.428326137 +0000 UTC m=+1.884848700,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.579139 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189726200930263d 
openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:57.428538941 +0000 UTC m=+1.885061484,LastTimestamp:2026-02-24 09:54:57.428538941 +0000 UTC m=+1.885061484,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.586620 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897262009bd9c65 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:57.437809765 +0000 UTC m=+1.894332308,LastTimestamp:2026-02-24 09:54:57.437809765 +0000 UTC m=+1.894332308,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.593764 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897262009d20c19 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:57.439149081 +0000 UTC m=+1.895671624,LastTimestamp:2026-02-24 09:54:57.439149081 +0000 UTC m=+1.895671624,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.600959 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897262009e1ca56 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:57.440180822 +0000 UTC m=+1.896703385,LastTimestamp:2026-02-24 09:54:57.440180822 +0000 UTC m=+1.896703385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.607338 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in 
the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897262009e5d1af openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:57.440444847 +0000 UTC m=+1.896967390,LastTimestamp:2026-02-24 09:54:57.440444847 +0000 UTC m=+1.896967390,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.614521 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897262009ebd481 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:57.440838785 +0000 UTC m=+1.897361348,LastTimestamp:2026-02-24 09:54:57.440838785 +0000 UTC m=+1.897361348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc 
kubenswrapper[4755]: E0224 09:55:20.621680 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897262009eebab4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:57.441028788 +0000 UTC m=+1.897551331,LastTimestamp:2026-02-24 09:54:57.441028788 +0000 UTC m=+1.897551331,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.628550 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189726200aa2981c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:57.452816412 +0000 UTC m=+1.909338955,LastTimestamp:2026-02-24 09:54:57.452816412 +0000 UTC m=+1.909338955,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc 
kubenswrapper[4755]: E0224 09:55:20.635683 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189726201bfbf5b1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:57.743885745 +0000 UTC m=+2.200408328,LastTimestamp:2026-02-24 09:54:57.743885745 +0000 UTC m=+2.200408328,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.640944 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189726201cbddc7d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:57.756593277 +0000 UTC m=+2.213115830,LastTimestamp:2026-02-24 09:54:57.756593277 +0000 UTC 
m=+2.213115830,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.646341 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189726201cd1dbb1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:57.757903793 +0000 UTC m=+2.214426346,LastTimestamp:2026-02-24 09:54:57.757903793 +0000 UTC m=+2.214426346,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.652711 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189726202a85d644 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:57.987802692 +0000 UTC m=+2.444325235,LastTimestamp:2026-02-24 09:54:57.987802692 +0000 UTC m=+2.444325235,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.659885 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189726202b588a70 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:58.001611376 +0000 UTC m=+2.458133929,LastTimestamp:2026-02-24 09:54:58.001611376 +0000 UTC m=+2.458133929,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.669183 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189726202b74cbe7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:58.003463143 +0000 UTC m=+2.459985716,LastTimestamp:2026-02-24 09:54:58.003463143 +0000 UTC m=+2.459985716,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.675038 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18972620395e51c8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:58.236871112 +0000 UTC m=+2.693393665,LastTimestamp:2026-02-24 09:54:58.236871112 +0000 UTC 
m=+2.693393665,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.681488 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189726203a74514f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:58.255089999 +0000 UTC m=+2.711612542,LastTimestamp:2026-02-24 09:54:58.255089999 +0000 UTC m=+2.711612542,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.687645 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189726203fedeee9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:58.346946281 +0000 UTC m=+2.803468834,LastTimestamp:2026-02-24 09:54:58.346946281 +0000 UTC m=+2.803468834,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.694103 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897262040240e39 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:58.350493241 +0000 UTC m=+2.807015794,LastTimestamp:2026-02-24 09:54:58.350493241 +0000 UTC m=+2.807015794,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.700201 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897262040649d3a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:58.354724154 +0000 UTC m=+2.811246707,LastTimestamp:2026-02-24 09:54:58.354724154 +0000 UTC m=+2.811246707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.701776 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18972620407fe964 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:58.356513124 +0000 UTC m=+2.813035707,LastTimestamp:2026-02-24 09:54:58.356513124 +0000 UTC m=+2.813035707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.707626 4755 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189726204e0fab6f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:58.584038255 +0000 UTC m=+3.040560798,LastTimestamp:2026-02-24 09:54:58.584038255 +0000 UTC m=+3.040560798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.714656 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189726204e22d39b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:58.585293723 +0000 UTC m=+3.041816266,LastTimestamp:2026-02-24 09:54:58.585293723 +0000 UTC m=+3.041816266,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.720017 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189726204e734d6e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:58.59056779 +0000 UTC m=+3.047090323,LastTimestamp:2026-02-24 09:54:58.59056779 +0000 UTC m=+3.047090323,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.727454 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189726204ef25fec openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:58.598895596 +0000 UTC m=+3.055418139,LastTimestamp:2026-02-24 09:54:58.598895596 +0000 UTC m=+3.055418139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.732873 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189726204efbedae openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:58.59952171 +0000 UTC m=+3.056044253,LastTimestamp:2026-02-24 09:54:58.59952171 +0000 UTC m=+3.056044253,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.739667 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189726204f035422 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:58.60000669 +0000 UTC m=+3.056529233,LastTimestamp:2026-02-24 
09:54:58.60000669 +0000 UTC m=+3.056529233,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.745390 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189726204f0d4b49 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:58.600659785 +0000 UTC m=+3.057182328,LastTimestamp:2026-02-24 09:54:58.600659785 +0000 UTC m=+3.057182328,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.751305 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189726204f4ff7b6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container 
etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:58.605029302 +0000 UTC m=+3.061551845,LastTimestamp:2026-02-24 09:54:58.605029302 +0000 UTC m=+3.061551845,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.758354 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18972620509b4a65 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:58.626742885 +0000 UTC m=+3.083265428,LastTimestamp:2026-02-24 09:54:58.626742885 +0000 UTC m=+3.083265428,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.764832 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897262050ad3ac4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:58.627918532 +0000 UTC m=+3.084441085,LastTimestamp:2026-02-24 09:54:58.627918532 +0000 UTC m=+3.084441085,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.772288 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189726205a7a3ec3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:58.792349379 +0000 UTC m=+3.248871932,LastTimestamp:2026-02-24 09:54:58.792349379 +0000 UTC m=+3.248871932,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.779552 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189726205bb4e1fb openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:58.812969467 +0000 UTC m=+3.269492030,LastTimestamp:2026-02-24 09:54:58.812969467 +0000 UTC m=+3.269492030,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.786695 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189726205bcb515a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:58.81443977 +0000 UTC m=+3.270962313,LastTimestamp:2026-02-24 09:54:58.81443977 +0000 UTC m=+3.270962313,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.794113 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189726205dc58604 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:58.847614468 +0000 UTC m=+3.304137021,LastTimestamp:2026-02-24 09:54:58.847614468 +0000 UTC m=+3.304137021,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.800937 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189726205ec8259f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:58.864563615 +0000 UTC m=+3.321086158,LastTimestamp:2026-02-24 09:54:58.864563615 
+0000 UTC m=+3.321086158,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.807957 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189726205edf6ffa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:58.866089978 +0000 UTC m=+3.322612541,LastTimestamp:2026-02-24 09:54:58.866089978 +0000 UTC m=+3.322612541,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.813726 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18972620697d9a9d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:59.044227741 +0000 UTC m=+3.500750284,LastTimestamp:2026-02-24 09:54:59.044227741 +0000 UTC m=+3.500750284,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.819238 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189726206a7c21e0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:59.060908512 +0000 UTC m=+3.517431055,LastTimestamp:2026-02-24 09:54:59.060908512 +0000 UTC m=+3.517431055,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.825803 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189726206b759652 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:59.077256786 +0000 UTC m=+3.533779319,LastTimestamp:2026-02-24 09:54:59.077256786 +0000 UTC m=+3.533779319,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.832362 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189726206cf125d5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:59.102131669 +0000 UTC m=+3.558654252,LastTimestamp:2026-02-24 09:54:59.102131669 +0000 UTC m=+3.558654252,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.838899 4755 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189726206d0a1092 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:59.103764626 +0000 UTC m=+3.560287169,LastTimestamp:2026-02-24 09:54:59.103764626 +0000 UTC m=+3.560287169,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.846477 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18972620793f2e3d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:59.308572221 +0000 UTC m=+3.765094764,LastTimestamp:2026-02-24 09:54:59.308572221 +0000 UTC 
m=+3.765094764,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.853907 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189726207a559e7f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:59.326819967 +0000 UTC m=+3.783342520,LastTimestamp:2026-02-24 09:54:59.326819967 +0000 UTC m=+3.783342520,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.860991 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189726207a6a3b0f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:59.328170767 +0000 UTC m=+3.784693340,LastTimestamp:2026-02-24 09:54:59.328170767 +0000 UTC m=+3.784693340,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.868336 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189726207d6dde38 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:59.378740792 +0000 UTC m=+3.835263335,LastTimestamp:2026-02-24 09:54:59.378740792 +0000 UTC m=+3.835263335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.875979 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18972620875e6a3e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:59.545500222 +0000 UTC m=+4.002022765,LastTimestamp:2026-02-24 09:54:59.545500222 +0000 UTC m=+4.002022765,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.884006 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897262087f6f684 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:59.555497604 +0000 UTC m=+4.012020147,LastTimestamp:2026-02-24 09:54:59.555497604 +0000 UTC m=+4.012020147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.892470 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189726208b0a79a2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:59.607108002 +0000 UTC m=+4.063630555,LastTimestamp:2026-02-24 09:54:59.607108002 +0000 UTC m=+4.063630555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.899678 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189726208bc01298 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:59.619009176 +0000 UTC m=+4.075531739,LastTimestamp:2026-02-24 09:54:59.619009176 +0000 UTC m=+4.075531739,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.909664 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18972620ba61fe25 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:00.401372709 +0000 UTC m=+4.857895282,LastTimestamp:2026-02-24 09:55:00.401372709 +0000 UTC m=+4.857895282,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.917124 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18972620cad3d905 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:00.677269765 +0000 UTC m=+5.133792338,LastTimestamp:2026-02-24 09:55:00.677269765 +0000 UTC m=+5.133792338,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.924416 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18972620cba38a69 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:00.690881129 +0000 UTC m=+5.147403702,LastTimestamp:2026-02-24 09:55:00.690881129 +0000 UTC m=+5.147403702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.931351 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18972620cbc014f0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:00.6927516 +0000 UTC m=+5.149274183,LastTimestamp:2026-02-24 09:55:00.6927516 +0000 UTC m=+5.149274183,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.937810 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18972620db92e60f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:00.958225935 +0000 UTC m=+5.414748518,LastTimestamp:2026-02-24 09:55:00.958225935 +0000 UTC m=+5.414748518,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.944976 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18972620dc8b9b21 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:00.974525217 +0000 UTC m=+5.431047800,LastTimestamp:2026-02-24 09:55:00.974525217 +0000 UTC m=+5.431047800,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.952168 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18972620dca2f7e4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:00.976056292 +0000 UTC m=+5.432578865,LastTimestamp:2026-02-24 09:55:00.976056292 +0000 UTC m=+5.432578865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.959040 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18972620edd22ebb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:01.264363195 +0000 UTC m=+5.720885778,LastTimestamp:2026-02-24 09:55:01.264363195 +0000 UTC m=+5.720885778,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.965853 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" 
in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18972620eed711f5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:01.281460725 +0000 UTC m=+5.737983308,LastTimestamp:2026-02-24 09:55:01.281460725 +0000 UTC m=+5.737983308,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.973526 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18972620eeeafc00 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:01.282765824 +0000 UTC m=+5.739288407,LastTimestamp:2026-02-24 09:55:01.282765824 +0000 UTC m=+5.739288407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.981221 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18972620fd983049 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:01.528997961 +0000 UTC m=+5.985520514,LastTimestamp:2026-02-24 09:55:01.528997961 +0000 UTC m=+5.985520514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.988153 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18972620feebd19e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:01.551255966 +0000 UTC m=+6.007778519,LastTimestamp:2026-02-24 09:55:01.551255966 +0000 UTC m=+6.007778519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:20 crc kubenswrapper[4755]: E0224 09:55:20.994818 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18972620ff044950 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:01.552859472 +0000 UTC m=+6.009382015,LastTimestamp:2026-02-24 09:55:01.552859472 +0000 UTC m=+6.009382015,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:21 crc kubenswrapper[4755]: E0224 09:55:21.001162 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189726210ddabc8b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:01.801794699 +0000 UTC m=+6.258317252,LastTimestamp:2026-02-24 09:55:01.801794699 +0000 UTC m=+6.258317252,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:21 crc kubenswrapper[4755]: E0224 09:55:21.008550 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189726210ee44388 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:01.819196296 +0000 UTC m=+6.275718879,LastTimestamp:2026-02-24 09:55:01.819196296 +0000 UTC m=+6.275718879,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:21 crc kubenswrapper[4755]: E0224 09:55:21.016129 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 24 09:55:21 crc kubenswrapper[4755]: &Event{ObjectMeta:{kube-controller-manager-crc.1897262137863227 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Feb 24 09:55:21 crc kubenswrapper[4755]: body: Feb 24 09:55:21 crc kubenswrapper[4755]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:02.500897319 +0000 UTC m=+6.957419902,LastTimestamp:2026-02-24 09:55:02.500897319 +0000 UTC m=+6.957419902,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 09:55:21 crc kubenswrapper[4755]: > Feb 24 09:55:21 crc kubenswrapper[4755]: E0224 09:55:21.019125 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18972621378747bd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:02.500968381 +0000 UTC m=+6.957490964,LastTimestamp:2026-02-24 09:55:02.500968381 +0000 UTC m=+6.957490964,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:21 crc kubenswrapper[4755]: E0224 09:55:21.024492 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 24 09:55:21 crc kubenswrapper[4755]: &Event{ObjectMeta:{kube-apiserver-crc.189726230cab9ed1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 24 09:55:21 crc kubenswrapper[4755]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 24 09:55:21 crc kubenswrapper[4755]: Feb 24 09:55:21 crc kubenswrapper[4755]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:10.371864273 +0000 UTC m=+14.828386816,LastTimestamp:2026-02-24 09:55:10.371864273 +0000 UTC m=+14.828386816,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 09:55:21 crc kubenswrapper[4755]: > Feb 24 09:55:21 crc kubenswrapper[4755]: E0224 09:55:21.031259 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189726230cac2e2e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:10.371900974 +0000 UTC m=+14.828423517,LastTimestamp:2026-02-24 09:55:10.371900974 +0000 UTC m=+14.828423517,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:21 crc kubenswrapper[4755]: E0224 09:55:21.038277 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189726230cab9ed1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 24 09:55:21 crc kubenswrapper[4755]: &Event{ObjectMeta:{kube-apiserver-crc.189726230cab9ed1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 24 09:55:21 crc kubenswrapper[4755]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 24 09:55:21 crc kubenswrapper[4755]: Feb 24 09:55:21 crc kubenswrapper[4755]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:10.371864273 +0000 UTC m=+14.828386816,LastTimestamp:2026-02-24 09:55:10.377443545 +0000 UTC m=+14.833966098,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 09:55:21 crc kubenswrapper[4755]: > Feb 24 09:55:21 crc kubenswrapper[4755]: E0224 09:55:21.048361 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189726230cac2e2e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189726230cac2e2e openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:10.371900974 +0000 UTC m=+14.828423517,LastTimestamp:2026-02-24 09:55:10.377505337 +0000 UTC m=+14.834027890,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:21 crc kubenswrapper[4755]: E0224 09:55:21.058179 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189726207a6a3b0f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189726207a6a3b0f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:59.328170767 +0000 UTC m=+3.784693340,LastTimestamp:2026-02-24 09:55:10.439349998 +0000 UTC m=+14.895872541,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:21 crc kubenswrapper[4755]: E0224 09:55:21.065294 4755 event.go:359] "Server rejected event (will 
not retry!)" err="events \"kube-apiserver-crc.18972620875e6a3e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18972620875e6a3e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:59.545500222 +0000 UTC m=+4.002022765,LastTimestamp:2026-02-24 09:55:10.628599293 +0000 UTC m=+15.085121846,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:21 crc kubenswrapper[4755]: E0224 09:55:21.072733 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897262087f6f684\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897262087f6f684 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:59.555497604 +0000 UTC m=+4.012020147,LastTimestamp:2026-02-24 09:55:10.638595353 +0000 UTC m=+15.095117936,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:21 crc kubenswrapper[4755]: E0224 09:55:21.081045 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 24 09:55:21 crc kubenswrapper[4755]: &Event{ObjectMeta:{kube-controller-manager-crc.189726238ba7f246 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 24 09:55:21 crc kubenswrapper[4755]: body: Feb 24 09:55:21 crc kubenswrapper[4755]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:12.502329926 +0000 UTC m=+16.958852509,LastTimestamp:2026-02-24 09:55:12.502329926 +0000 UTC m=+16.958852509,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 09:55:21 crc kubenswrapper[4755]: > Feb 24 09:55:21 crc kubenswrapper[4755]: E0224 09:55:21.089586 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189726238ba9a5b8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:12.5024414 +0000 UTC m=+16.958963983,LastTimestamp:2026-02-24 09:55:12.5024414 +0000 UTC m=+16.958963983,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:21 crc kubenswrapper[4755]: I0224 09:55:21.237825 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:22 crc kubenswrapper[4755]: I0224 09:55:22.239858 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:22 crc kubenswrapper[4755]: I0224 09:55:22.502904 4755 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 09:55:22 crc kubenswrapper[4755]: I0224 09:55:22.502986 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 09:55:22 crc kubenswrapper[4755]: I0224 09:55:22.503056 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:55:22 crc kubenswrapper[4755]: I0224 09:55:22.503273 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:22 crc kubenswrapper[4755]: I0224 09:55:22.504739 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:22 crc kubenswrapper[4755]: I0224 09:55:22.504906 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:22 crc kubenswrapper[4755]: I0224 09:55:22.504939 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:22 crc kubenswrapper[4755]: I0224 09:55:22.505623 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"ad286ba8a2e826f23c678d6ebfecc3f21235bc71057ad52d80e3790c2740263f"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 24 09:55:22 crc kubenswrapper[4755]: I0224 09:55:22.505930 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://ad286ba8a2e826f23c678d6ebfecc3f21235bc71057ad52d80e3790c2740263f" gracePeriod=30 Feb 24 
09:55:22 crc kubenswrapper[4755]: E0224 09:55:22.512537 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189726238ba7f246\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 24 09:55:22 crc kubenswrapper[4755]: &Event{ObjectMeta:{kube-controller-manager-crc.189726238ba7f246 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 24 09:55:22 crc kubenswrapper[4755]: body: Feb 24 09:55:22 crc kubenswrapper[4755]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:12.502329926 +0000 UTC m=+16.958852509,LastTimestamp:2026-02-24 09:55:22.502965646 +0000 UTC m=+26.959488229,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 09:55:22 crc kubenswrapper[4755]: > Feb 24 09:55:22 crc kubenswrapper[4755]: E0224 09:55:22.520995 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189726238ba9a5b8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189726238ba9a5b8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:12.5024414 +0000 UTC m=+16.958963983,LastTimestamp:2026-02-24 09:55:22.503024208 +0000 UTC m=+26.959546791,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:22 crc kubenswrapper[4755]: E0224 09:55:22.528836 4755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18972625dfea4375 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:22.505896821 +0000 UTC m=+26.962419394,LastTimestamp:2026-02-24 09:55:22.505896821 +0000 UTC m=+26.962419394,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:22 crc kubenswrapper[4755]: E0224 09:55:22.638167 4755 event.go:359] "Server rejected event (will 
not retry!)" err="events \"kube-controller-manager-crc.1897262009e5d1af\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897262009e5d1af openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:57.440444847 +0000 UTC m=+1.896967390,LastTimestamp:2026-02-24 09:55:22.630889684 +0000 UTC m=+27.087412267,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:22 crc kubenswrapper[4755]: E0224 09:55:22.858913 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189726201bfbf5b1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189726201bfbf5b1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container 
cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:57.743885745 +0000 UTC m=+2.200408328,LastTimestamp:2026-02-24 09:55:22.854083853 +0000 UTC m=+27.310606396,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:22 crc kubenswrapper[4755]: E0224 09:55:22.871152 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189726201cbddc7d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189726201cbddc7d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:54:57.756593277 +0000 UTC m=+2.213115830,LastTimestamp:2026-02-24 09:55:22.864413582 +0000 UTC m=+27.320936135,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:23 crc kubenswrapper[4755]: I0224 09:55:23.240853 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:23 crc kubenswrapper[4755]: I0224 09:55:23.492855 4755 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 24 09:55:23 crc kubenswrapper[4755]: I0224 09:55:23.493478 4755 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="ad286ba8a2e826f23c678d6ebfecc3f21235bc71057ad52d80e3790c2740263f" exitCode=255 Feb 24 09:55:23 crc kubenswrapper[4755]: I0224 09:55:23.493531 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"ad286ba8a2e826f23c678d6ebfecc3f21235bc71057ad52d80e3790c2740263f"} Feb 24 09:55:23 crc kubenswrapper[4755]: I0224 09:55:23.493577 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fd1e718cfab9164e39a5c55224adf4bbdd5f079f640714fee46be8a4fb0f976e"} Feb 24 09:55:23 crc kubenswrapper[4755]: I0224 09:55:23.493709 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:23 crc kubenswrapper[4755]: I0224 09:55:23.494926 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:23 crc kubenswrapper[4755]: I0224 09:55:23.495175 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:23 crc kubenswrapper[4755]: I0224 09:55:23.495335 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:23 crc kubenswrapper[4755]: I0224 09:55:23.772234 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:23 crc kubenswrapper[4755]: I0224 09:55:23.773888 4755 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:23 crc kubenswrapper[4755]: I0224 09:55:23.773937 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:23 crc kubenswrapper[4755]: I0224 09:55:23.773951 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:23 crc kubenswrapper[4755]: I0224 09:55:23.773982 4755 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:55:23 crc kubenswrapper[4755]: E0224 09:55:23.780880 4755 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 09:55:23 crc kubenswrapper[4755]: E0224 09:55:23.787156 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 09:55:23 crc kubenswrapper[4755]: W0224 09:55:23.876529 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 24 09:55:23 crc kubenswrapper[4755]: E0224 09:55:23.876763 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 24 09:55:24 crc kubenswrapper[4755]: I0224 09:55:24.239707 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is 
forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:25 crc kubenswrapper[4755]: I0224 09:55:25.239466 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:25 crc kubenswrapper[4755]: I0224 09:55:25.313235 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:55:25 crc kubenswrapper[4755]: I0224 09:55:25.313470 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:25 crc kubenswrapper[4755]: I0224 09:55:25.315146 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:25 crc kubenswrapper[4755]: I0224 09:55:25.315201 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:25 crc kubenswrapper[4755]: I0224 09:55:25.315219 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:26 crc kubenswrapper[4755]: I0224 09:55:26.239751 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:26 crc kubenswrapper[4755]: W0224 09:55:26.325006 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 24 09:55:26 crc kubenswrapper[4755]: E0224 09:55:26.325131 4755 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 24 09:55:26 crc kubenswrapper[4755]: E0224 09:55:26.403768 4755 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 09:55:27 crc kubenswrapper[4755]: I0224 09:55:27.238646 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:27 crc kubenswrapper[4755]: W0224 09:55:27.351008 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 24 09:55:27 crc kubenswrapper[4755]: E0224 09:55:27.351112 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 24 09:55:28 crc kubenswrapper[4755]: W0224 09:55:28.039365 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:28 crc kubenswrapper[4755]: E0224 09:55:28.039441 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: 
csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 24 09:55:28 crc kubenswrapper[4755]: I0224 09:55:28.240731 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:29 crc kubenswrapper[4755]: I0224 09:55:29.237836 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:29 crc kubenswrapper[4755]: I0224 09:55:29.501455 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:55:29 crc kubenswrapper[4755]: I0224 09:55:29.501722 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:29 crc kubenswrapper[4755]: I0224 09:55:29.503440 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:29 crc kubenswrapper[4755]: I0224 09:55:29.503488 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:29 crc kubenswrapper[4755]: I0224 09:55:29.503498 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:30 crc kubenswrapper[4755]: I0224 09:55:30.239809 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:30 crc 
kubenswrapper[4755]: I0224 09:55:30.781919 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:30 crc kubenswrapper[4755]: I0224 09:55:30.783686 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:30 crc kubenswrapper[4755]: I0224 09:55:30.783745 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:30 crc kubenswrapper[4755]: I0224 09:55:30.783765 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:30 crc kubenswrapper[4755]: I0224 09:55:30.783803 4755 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:55:30 crc kubenswrapper[4755]: E0224 09:55:30.789638 4755 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 09:55:30 crc kubenswrapper[4755]: E0224 09:55:30.790126 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 09:55:31 crc kubenswrapper[4755]: I0224 09:55:31.240616 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:32 crc kubenswrapper[4755]: I0224 09:55:32.240169 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster 
scope Feb 24 09:55:32 crc kubenswrapper[4755]: I0224 09:55:32.502169 4755 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 09:55:32 crc kubenswrapper[4755]: I0224 09:55:32.502267 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 09:55:32 crc kubenswrapper[4755]: E0224 09:55:32.507258 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189726238ba7f246\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 24 09:55:32 crc kubenswrapper[4755]: &Event{ObjectMeta:{kube-controller-manager-crc.189726238ba7f246 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 24 09:55:32 crc kubenswrapper[4755]: body: Feb 24 09:55:32 crc kubenswrapper[4755]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:12.502329926 +0000 UTC m=+16.958852509,LastTimestamp:2026-02-24 09:55:32.502237281 +0000 UTC m=+36.958759834,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 09:55:32 crc kubenswrapper[4755]: > Feb 24 09:55:32 crc kubenswrapper[4755]: E0224 09:55:32.514999 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189726238ba9a5b8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189726238ba9a5b8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:12.5024414 +0000 UTC m=+16.958963983,LastTimestamp:2026-02-24 09:55:32.502296812 +0000 UTC m=+36.958819365,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:55:33 crc kubenswrapper[4755]: I0224 09:55:33.239754 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:34 crc kubenswrapper[4755]: I0224 
09:55:34.241243 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:34 crc kubenswrapper[4755]: I0224 09:55:34.315958 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:34 crc kubenswrapper[4755]: I0224 09:55:34.318163 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:34 crc kubenswrapper[4755]: I0224 09:55:34.318223 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:34 crc kubenswrapper[4755]: I0224 09:55:34.318242 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:34 crc kubenswrapper[4755]: I0224 09:55:34.319111 4755 scope.go:117] "RemoveContainer" containerID="60af07f95b287b6fdda86078dff5864416379a816e01ac47eddaeb6c27ebb2a2" Feb 24 09:55:35 crc kubenswrapper[4755]: I0224 09:55:35.238519 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:35 crc kubenswrapper[4755]: I0224 09:55:35.530042 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 24 09:55:35 crc kubenswrapper[4755]: I0224 09:55:35.530808 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 24 09:55:35 crc kubenswrapper[4755]: I0224 09:55:35.533798 4755 generic.go:334] "Generic 
(PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1d249ad863bc8357b194b97702f7e541b3d7d51179143fd8cca154f5e873e826" exitCode=255 Feb 24 09:55:35 crc kubenswrapper[4755]: I0224 09:55:35.533852 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1d249ad863bc8357b194b97702f7e541b3d7d51179143fd8cca154f5e873e826"} Feb 24 09:55:35 crc kubenswrapper[4755]: I0224 09:55:35.533900 4755 scope.go:117] "RemoveContainer" containerID="60af07f95b287b6fdda86078dff5864416379a816e01ac47eddaeb6c27ebb2a2" Feb 24 09:55:35 crc kubenswrapper[4755]: I0224 09:55:35.534190 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:35 crc kubenswrapper[4755]: I0224 09:55:35.536248 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:35 crc kubenswrapper[4755]: I0224 09:55:35.536326 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:35 crc kubenswrapper[4755]: I0224 09:55:35.536349 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:35 crc kubenswrapper[4755]: I0224 09:55:35.537218 4755 scope.go:117] "RemoveContainer" containerID="1d249ad863bc8357b194b97702f7e541b3d7d51179143fd8cca154f5e873e826" Feb 24 09:55:35 crc kubenswrapper[4755]: E0224 09:55:35.537590 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:55:36 crc 
kubenswrapper[4755]: I0224 09:55:36.238528 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:36 crc kubenswrapper[4755]: E0224 09:55:36.404003 4755 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 09:55:36 crc kubenswrapper[4755]: I0224 09:55:36.538255 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 24 09:55:37 crc kubenswrapper[4755]: I0224 09:55:37.240721 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:37 crc kubenswrapper[4755]: I0224 09:55:37.790690 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:37 crc kubenswrapper[4755]: I0224 09:55:37.792439 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:37 crc kubenswrapper[4755]: I0224 09:55:37.792499 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:37 crc kubenswrapper[4755]: I0224 09:55:37.792519 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:37 crc kubenswrapper[4755]: I0224 09:55:37.792556 4755 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:55:37 crc kubenswrapper[4755]: E0224 09:55:37.797803 4755 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is 
forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 09:55:37 crc kubenswrapper[4755]: E0224 09:55:37.798344 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 09:55:38 crc kubenswrapper[4755]: I0224 09:55:38.239703 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:39 crc kubenswrapper[4755]: I0224 09:55:39.239303 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:40 crc kubenswrapper[4755]: I0224 09:55:40.240656 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:40 crc kubenswrapper[4755]: I0224 09:55:40.506424 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:55:40 crc kubenswrapper[4755]: I0224 09:55:40.506667 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:40 crc kubenswrapper[4755]: I0224 09:55:40.508353 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:40 crc kubenswrapper[4755]: I0224 09:55:40.508426 4755 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:40 crc kubenswrapper[4755]: I0224 09:55:40.508448 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:40 crc kubenswrapper[4755]: I0224 09:55:40.509319 4755 scope.go:117] "RemoveContainer" containerID="1d249ad863bc8357b194b97702f7e541b3d7d51179143fd8cca154f5e873e826" Feb 24 09:55:40 crc kubenswrapper[4755]: E0224 09:55:40.509653 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:55:41 crc kubenswrapper[4755]: I0224 09:55:41.239007 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:42 crc kubenswrapper[4755]: I0224 09:55:42.241490 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:42 crc kubenswrapper[4755]: I0224 09:55:42.502241 4755 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 09:55:42 crc kubenswrapper[4755]: I0224 09:55:42.502307 4755 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 09:55:42 crc kubenswrapper[4755]: E0224 09:55:42.506767 4755 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189726238ba7f246\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 24 09:55:42 crc kubenswrapper[4755]: &Event{ObjectMeta:{kube-controller-manager-crc.189726238ba7f246 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 24 09:55:42 crc kubenswrapper[4755]: body: Feb 24 09:55:42 crc kubenswrapper[4755]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:55:12.502329926 +0000 UTC m=+16.958852509,LastTimestamp:2026-02-24 09:55:42.502289455 +0000 UTC m=+46.958812008,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 09:55:42 crc kubenswrapper[4755]: > Feb 24 09:55:43 crc kubenswrapper[4755]: I0224 09:55:43.239550 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:43 crc kubenswrapper[4755]: I0224 09:55:43.906542 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:55:43 crc kubenswrapper[4755]: I0224 09:55:43.906838 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:43 crc kubenswrapper[4755]: I0224 09:55:43.908585 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:43 crc kubenswrapper[4755]: I0224 09:55:43.908666 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:43 crc kubenswrapper[4755]: I0224 09:55:43.908684 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:43 crc kubenswrapper[4755]: I0224 09:55:43.909601 4755 scope.go:117] "RemoveContainer" containerID="1d249ad863bc8357b194b97702f7e541b3d7d51179143fd8cca154f5e873e826" Feb 24 09:55:43 crc kubenswrapper[4755]: E0224 09:55:43.909890 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:55:44 crc kubenswrapper[4755]: I0224 09:55:44.239427 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:44 crc kubenswrapper[4755]: I0224 09:55:44.798859 4755 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:44 crc kubenswrapper[4755]: I0224 09:55:44.800578 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:44 crc kubenswrapper[4755]: I0224 09:55:44.800644 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:44 crc kubenswrapper[4755]: I0224 09:55:44.800665 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:44 crc kubenswrapper[4755]: I0224 09:55:44.800704 4755 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:55:44 crc kubenswrapper[4755]: E0224 09:55:44.807048 4755 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 09:55:44 crc kubenswrapper[4755]: E0224 09:55:44.807729 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 09:55:45 crc kubenswrapper[4755]: I0224 09:55:45.239950 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:46 crc kubenswrapper[4755]: W0224 09:55:46.188849 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 24 09:55:46 crc kubenswrapper[4755]: E0224 09:55:46.188906 4755 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 24 09:55:46 crc kubenswrapper[4755]: I0224 09:55:46.237680 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:46 crc kubenswrapper[4755]: E0224 09:55:46.404206 4755 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 09:55:46 crc kubenswrapper[4755]: I0224 09:55:46.405824 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:55:46 crc kubenswrapper[4755]: I0224 09:55:46.406061 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:46 crc kubenswrapper[4755]: I0224 09:55:46.407316 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:46 crc kubenswrapper[4755]: I0224 09:55:46.407381 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:46 crc kubenswrapper[4755]: I0224 09:55:46.407392 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:46 crc kubenswrapper[4755]: W0224 09:55:46.919947 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 24 09:55:46 crc kubenswrapper[4755]: E0224 09:55:46.920034 
4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 24 09:55:47 crc kubenswrapper[4755]: I0224 09:55:47.238130 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:48 crc kubenswrapper[4755]: I0224 09:55:48.237467 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:48 crc kubenswrapper[4755]: W0224 09:55:48.876848 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:48 crc kubenswrapper[4755]: E0224 09:55:48.876897 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 24 09:55:49 crc kubenswrapper[4755]: I0224 09:55:49.240146 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:49 crc kubenswrapper[4755]: I0224 09:55:49.507759 4755 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:55:49 crc kubenswrapper[4755]: I0224 09:55:49.508011 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:49 crc kubenswrapper[4755]: I0224 09:55:49.509634 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:49 crc kubenswrapper[4755]: I0224 09:55:49.509711 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:49 crc kubenswrapper[4755]: I0224 09:55:49.509727 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:49 crc kubenswrapper[4755]: I0224 09:55:49.514222 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:55:49 crc kubenswrapper[4755]: I0224 09:55:49.576791 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:49 crc kubenswrapper[4755]: I0224 09:55:49.578463 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:49 crc kubenswrapper[4755]: I0224 09:55:49.578523 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:49 crc kubenswrapper[4755]: I0224 09:55:49.578538 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:49 crc kubenswrapper[4755]: W0224 09:55:49.897835 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 24 
09:55:49 crc kubenswrapper[4755]: E0224 09:55:49.897899 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 24 09:55:50 crc kubenswrapper[4755]: I0224 09:55:50.239061 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:51 crc kubenswrapper[4755]: I0224 09:55:51.238047 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:51 crc kubenswrapper[4755]: I0224 09:55:51.807291 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:51 crc kubenswrapper[4755]: I0224 09:55:51.808749 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:51 crc kubenswrapper[4755]: I0224 09:55:51.808814 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:51 crc kubenswrapper[4755]: I0224 09:55:51.808836 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:51 crc kubenswrapper[4755]: I0224 09:55:51.808882 4755 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:55:51 crc kubenswrapper[4755]: E0224 09:55:51.814444 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is 
forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 09:55:51 crc kubenswrapper[4755]: E0224 09:55:51.815443 4755 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 09:55:52 crc kubenswrapper[4755]: I0224 09:55:52.239426 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:53 crc kubenswrapper[4755]: I0224 09:55:53.236854 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:54 crc kubenswrapper[4755]: I0224 09:55:54.238613 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:55 crc kubenswrapper[4755]: I0224 09:55:55.239854 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:55 crc kubenswrapper[4755]: I0224 09:55:55.315629 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:55 crc kubenswrapper[4755]: I0224 09:55:55.318138 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:55 crc 
kubenswrapper[4755]: I0224 09:55:55.318213 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:55 crc kubenswrapper[4755]: I0224 09:55:55.318235 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:55 crc kubenswrapper[4755]: I0224 09:55:55.319240 4755 scope.go:117] "RemoveContainer" containerID="1d249ad863bc8357b194b97702f7e541b3d7d51179143fd8cca154f5e873e826" Feb 24 09:55:55 crc kubenswrapper[4755]: E0224 09:55:55.319572 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:55:56 crc kubenswrapper[4755]: I0224 09:55:56.241214 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:56 crc kubenswrapper[4755]: E0224 09:55:56.404892 4755 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 09:55:57 crc kubenswrapper[4755]: I0224 09:55:57.239679 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:58 crc kubenswrapper[4755]: I0224 09:55:58.239889 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot 
get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:55:58 crc kubenswrapper[4755]: I0224 09:55:58.815542 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:55:58 crc kubenswrapper[4755]: I0224 09:55:58.820687 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:55:58 crc kubenswrapper[4755]: I0224 09:55:58.820776 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:55:58 crc kubenswrapper[4755]: I0224 09:55:58.820798 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:55:58 crc kubenswrapper[4755]: I0224 09:55:58.820857 4755 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:55:58 crc kubenswrapper[4755]: E0224 09:55:58.822997 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 09:55:58 crc kubenswrapper[4755]: E0224 09:55:58.830346 4755 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 09:55:59 crc kubenswrapper[4755]: I0224 09:55:59.240766 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:56:00 crc kubenswrapper[4755]: I0224 09:56:00.239487 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:56:01 crc kubenswrapper[4755]: I0224 09:56:01.234851 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:56:01 crc kubenswrapper[4755]: I0224 09:56:01.325848 4755 csr.go:261] certificate signing request csr-t8jzt is approved, waiting to be issued Feb 24 09:56:01 crc kubenswrapper[4755]: I0224 09:56:01.336164 4755 csr.go:257] certificate signing request csr-t8jzt is issued Feb 24 09:56:01 crc kubenswrapper[4755]: I0224 09:56:01.454136 4755 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 24 09:56:02 crc kubenswrapper[4755]: I0224 09:56:02.068757 4755 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 24 09:56:02 crc kubenswrapper[4755]: I0224 09:56:02.338199 4755 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-25 19:36:44.375033627 +0000 UTC Feb 24 09:56:02 crc kubenswrapper[4755]: I0224 09:56:02.338242 4755 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7305h40m42.03679395s for next certificate rotation Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.831129 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.832746 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.832811 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:05 crc 
kubenswrapper[4755]: I0224 09:56:05.832826 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.832960 4755 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.843870 4755 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.844265 4755 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 24 09:56:05 crc kubenswrapper[4755]: E0224 09:56:05.844301 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.848674 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.848728 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.848742 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.848763 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.848778 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:05Z","lastTransitionTime":"2026-02-24T09:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:05 crc kubenswrapper[4755]: E0224 09:56:05.869909 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.879710 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.879752 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.879763 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.879784 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.879798 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:05Z","lastTransitionTime":"2026-02-24T09:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:05 crc kubenswrapper[4755]: E0224 09:56:05.895161 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.906612 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.906676 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.906693 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.906712 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.906724 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:05Z","lastTransitionTime":"2026-02-24T09:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:05 crc kubenswrapper[4755]: E0224 09:56:05.920903 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.930844 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.930912 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.930932 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.930970 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:05 crc kubenswrapper[4755]: I0224 09:56:05.930988 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:05Z","lastTransitionTime":"2026-02-24T09:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:05 crc kubenswrapper[4755]: E0224 09:56:05.942629 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:05 crc kubenswrapper[4755]: E0224 09:56:05.942760 4755 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 09:56:05 crc kubenswrapper[4755]: E0224 09:56:05.942790 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:06 crc kubenswrapper[4755]: E0224 09:56:06.042866 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:06 crc kubenswrapper[4755]: E0224 09:56:06.143229 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:06 crc kubenswrapper[4755]: E0224 09:56:06.243971 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:06 crc kubenswrapper[4755]: E0224 09:56:06.344930 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:06 crc kubenswrapper[4755]: E0224 09:56:06.405022 4755 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 09:56:06 crc kubenswrapper[4755]: E0224 09:56:06.445617 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:06 crc kubenswrapper[4755]: E0224 09:56:06.546700 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:06 crc kubenswrapper[4755]: E0224 09:56:06.647859 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:06 crc kubenswrapper[4755]: 
E0224 09:56:06.748866 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:06 crc kubenswrapper[4755]: E0224 09:56:06.849043 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:06 crc kubenswrapper[4755]: E0224 09:56:06.949621 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:07 crc kubenswrapper[4755]: E0224 09:56:07.050761 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:07 crc kubenswrapper[4755]: E0224 09:56:07.151839 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:07 crc kubenswrapper[4755]: E0224 09:56:07.252898 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:07 crc kubenswrapper[4755]: E0224 09:56:07.353368 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:07 crc kubenswrapper[4755]: E0224 09:56:07.454493 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:07 crc kubenswrapper[4755]: E0224 09:56:07.555598 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:07 crc kubenswrapper[4755]: E0224 09:56:07.656523 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:07 crc kubenswrapper[4755]: E0224 09:56:07.757031 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:07 crc kubenswrapper[4755]: E0224 09:56:07.857988 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" 
Feb 24 09:56:07 crc kubenswrapper[4755]: E0224 09:56:07.958147 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:08 crc kubenswrapper[4755]: E0224 09:56:08.058490 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:08 crc kubenswrapper[4755]: E0224 09:56:08.159581 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:08 crc kubenswrapper[4755]: E0224 09:56:08.260223 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:08 crc kubenswrapper[4755]: E0224 09:56:08.361313 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:08 crc kubenswrapper[4755]: E0224 09:56:08.462465 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:08 crc kubenswrapper[4755]: E0224 09:56:08.563561 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:08 crc kubenswrapper[4755]: E0224 09:56:08.664145 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:08 crc kubenswrapper[4755]: E0224 09:56:08.765229 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:08 crc kubenswrapper[4755]: E0224 09:56:08.865418 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:08 crc kubenswrapper[4755]: E0224 09:56:08.966434 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:09 crc kubenswrapper[4755]: E0224 09:56:09.067303 4755 kubelet_node_status.go:503] "Error getting the current node 
from lister" err="node \"crc\" not found" Feb 24 09:56:09 crc kubenswrapper[4755]: E0224 09:56:09.168045 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:09 crc kubenswrapper[4755]: E0224 09:56:09.268868 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:09 crc kubenswrapper[4755]: I0224 09:56:09.316306 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:56:09 crc kubenswrapper[4755]: I0224 09:56:09.317574 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:09 crc kubenswrapper[4755]: I0224 09:56:09.317612 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:09 crc kubenswrapper[4755]: I0224 09:56:09.317623 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:09 crc kubenswrapper[4755]: I0224 09:56:09.318242 4755 scope.go:117] "RemoveContainer" containerID="1d249ad863bc8357b194b97702f7e541b3d7d51179143fd8cca154f5e873e826" Feb 24 09:56:09 crc kubenswrapper[4755]: E0224 09:56:09.369427 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:09 crc kubenswrapper[4755]: E0224 09:56:09.470039 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:09 crc kubenswrapper[4755]: E0224 09:56:09.571412 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:09 crc kubenswrapper[4755]: I0224 09:56:09.634996 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 
24 09:56:09 crc kubenswrapper[4755]: I0224 09:56:09.636824 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6"} Feb 24 09:56:09 crc kubenswrapper[4755]: I0224 09:56:09.636996 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:56:09 crc kubenswrapper[4755]: I0224 09:56:09.638325 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:09 crc kubenswrapper[4755]: I0224 09:56:09.638386 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:09 crc kubenswrapper[4755]: I0224 09:56:09.638407 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:09 crc kubenswrapper[4755]: E0224 09:56:09.671539 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:09 crc kubenswrapper[4755]: E0224 09:56:09.771742 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:09 crc kubenswrapper[4755]: E0224 09:56:09.872202 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:09 crc kubenswrapper[4755]: E0224 09:56:09.973361 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:10 crc kubenswrapper[4755]: E0224 09:56:10.073813 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:10 crc kubenswrapper[4755]: E0224 09:56:10.174414 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"crc\" not found" Feb 24 09:56:10 crc kubenswrapper[4755]: E0224 09:56:10.274520 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:10 crc kubenswrapper[4755]: E0224 09:56:10.375563 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:10 crc kubenswrapper[4755]: E0224 09:56:10.475663 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:10 crc kubenswrapper[4755]: I0224 09:56:10.506568 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:56:10 crc kubenswrapper[4755]: E0224 09:56:10.576793 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:10 crc kubenswrapper[4755]: I0224 09:56:10.641641 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 24 09:56:10 crc kubenswrapper[4755]: I0224 09:56:10.642665 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 24 09:56:10 crc kubenswrapper[4755]: I0224 09:56:10.645151 4755 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6" exitCode=255 Feb 24 09:56:10 crc kubenswrapper[4755]: I0224 09:56:10.645209 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6"} Feb 24 09:56:10 crc kubenswrapper[4755]: I0224 09:56:10.645268 4755 
scope.go:117] "RemoveContainer" containerID="1d249ad863bc8357b194b97702f7e541b3d7d51179143fd8cca154f5e873e826" Feb 24 09:56:10 crc kubenswrapper[4755]: I0224 09:56:10.645339 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:56:10 crc kubenswrapper[4755]: I0224 09:56:10.646477 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:10 crc kubenswrapper[4755]: I0224 09:56:10.646524 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:10 crc kubenswrapper[4755]: I0224 09:56:10.646540 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:10 crc kubenswrapper[4755]: I0224 09:56:10.647927 4755 scope.go:117] "RemoveContainer" containerID="efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6" Feb 24 09:56:10 crc kubenswrapper[4755]: E0224 09:56:10.648324 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:56:10 crc kubenswrapper[4755]: E0224 09:56:10.677846 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:10 crc kubenswrapper[4755]: E0224 09:56:10.778327 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:10 crc kubenswrapper[4755]: E0224 09:56:10.878854 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:10 crc kubenswrapper[4755]: E0224 
09:56:10.979946 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:11 crc kubenswrapper[4755]: E0224 09:56:11.080959 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:11 crc kubenswrapper[4755]: E0224 09:56:11.181931 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:11 crc kubenswrapper[4755]: E0224 09:56:11.282134 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:11 crc kubenswrapper[4755]: E0224 09:56:11.382555 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:11 crc kubenswrapper[4755]: E0224 09:56:11.483250 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:11 crc kubenswrapper[4755]: E0224 09:56:11.583949 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:11 crc kubenswrapper[4755]: I0224 09:56:11.650727 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 24 09:56:11 crc kubenswrapper[4755]: I0224 09:56:11.654224 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:56:11 crc kubenswrapper[4755]: I0224 09:56:11.655663 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:11 crc kubenswrapper[4755]: I0224 09:56:11.655719 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:11 crc kubenswrapper[4755]: I0224 09:56:11.655742 4755 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:11 crc kubenswrapper[4755]: I0224 09:56:11.656830 4755 scope.go:117] "RemoveContainer" containerID="efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6" Feb 24 09:56:11 crc kubenswrapper[4755]: E0224 09:56:11.657213 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:56:11 crc kubenswrapper[4755]: E0224 09:56:11.684746 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:11 crc kubenswrapper[4755]: E0224 09:56:11.785323 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:11 crc kubenswrapper[4755]: E0224 09:56:11.885419 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:11 crc kubenswrapper[4755]: E0224 09:56:11.986507 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:12 crc kubenswrapper[4755]: E0224 09:56:12.087451 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:12 crc kubenswrapper[4755]: E0224 09:56:12.187962 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:12 crc kubenswrapper[4755]: E0224 09:56:12.288731 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:12 crc kubenswrapper[4755]: E0224 09:56:12.389776 4755 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:12 crc kubenswrapper[4755]: E0224 09:56:12.490967 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:12 crc kubenswrapper[4755]: E0224 09:56:12.591720 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:12 crc kubenswrapper[4755]: E0224 09:56:12.692621 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:12 crc kubenswrapper[4755]: E0224 09:56:12.793035 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:12 crc kubenswrapper[4755]: E0224 09:56:12.893740 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:12 crc kubenswrapper[4755]: E0224 09:56:12.994801 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:13 crc kubenswrapper[4755]: E0224 09:56:13.095048 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:13 crc kubenswrapper[4755]: E0224 09:56:13.195744 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:13 crc kubenswrapper[4755]: E0224 09:56:13.295965 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:13 crc kubenswrapper[4755]: E0224 09:56:13.396395 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:13 crc kubenswrapper[4755]: E0224 09:56:13.496793 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:13 crc 
kubenswrapper[4755]: E0224 09:56:13.597688 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:13 crc kubenswrapper[4755]: E0224 09:56:13.698718 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:13 crc kubenswrapper[4755]: E0224 09:56:13.799406 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:13 crc kubenswrapper[4755]: E0224 09:56:13.899913 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:13 crc kubenswrapper[4755]: I0224 09:56:13.907289 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:56:13 crc kubenswrapper[4755]: I0224 09:56:13.907553 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:56:13 crc kubenswrapper[4755]: I0224 09:56:13.909329 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:13 crc kubenswrapper[4755]: I0224 09:56:13.909396 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:13 crc kubenswrapper[4755]: I0224 09:56:13.909422 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:13 crc kubenswrapper[4755]: I0224 09:56:13.910825 4755 scope.go:117] "RemoveContainer" containerID="efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6" Feb 24 09:56:13 crc kubenswrapper[4755]: E0224 09:56:13.911206 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:56:14 crc kubenswrapper[4755]: E0224 09:56:14.000499 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:14 crc kubenswrapper[4755]: E0224 09:56:14.101326 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:14 crc kubenswrapper[4755]: E0224 09:56:14.201980 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:14 crc kubenswrapper[4755]: E0224 09:56:14.302694 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:14 crc kubenswrapper[4755]: E0224 09:56:14.402894 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:14 crc kubenswrapper[4755]: E0224 09:56:14.503164 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:14 crc kubenswrapper[4755]: E0224 09:56:14.603783 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:14 crc kubenswrapper[4755]: E0224 09:56:14.704859 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:14 crc kubenswrapper[4755]: E0224 09:56:14.805899 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:14 crc kubenswrapper[4755]: E0224 09:56:14.906438 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:15 crc kubenswrapper[4755]: E0224 09:56:15.007168 4755 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"crc\" not found" Feb 24 09:56:15 crc kubenswrapper[4755]: E0224 09:56:15.107805 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:15 crc kubenswrapper[4755]: E0224 09:56:15.208312 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:15 crc kubenswrapper[4755]: E0224 09:56:15.309112 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:15 crc kubenswrapper[4755]: E0224 09:56:15.409847 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:15 crc kubenswrapper[4755]: E0224 09:56:15.510384 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:15 crc kubenswrapper[4755]: E0224 09:56:15.610510 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:15 crc kubenswrapper[4755]: E0224 09:56:15.710930 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:15 crc kubenswrapper[4755]: E0224 09:56:15.811374 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:15 crc kubenswrapper[4755]: E0224 09:56:15.912300 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:16 crc kubenswrapper[4755]: E0224 09:56:16.013427 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:16 crc kubenswrapper[4755]: E0224 09:56:16.102782 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 24 09:56:16 crc kubenswrapper[4755]: I0224 
09:56:16.108243 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:16 crc kubenswrapper[4755]: I0224 09:56:16.108339 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:16 crc kubenswrapper[4755]: I0224 09:56:16.108359 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:16 crc kubenswrapper[4755]: I0224 09:56:16.108384 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:16 crc kubenswrapper[4755]: I0224 09:56:16.108401 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:16Z","lastTransitionTime":"2026-02-24T09:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:16 crc kubenswrapper[4755]: E0224 09:56:16.124198 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:16 crc kubenswrapper[4755]: I0224 09:56:16.134991 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:16 crc kubenswrapper[4755]: I0224 09:56:16.135039 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:16 crc kubenswrapper[4755]: I0224 09:56:16.135057 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:16 crc kubenswrapper[4755]: I0224 09:56:16.135116 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:16 crc kubenswrapper[4755]: I0224 09:56:16.135143 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:16Z","lastTransitionTime":"2026-02-24T09:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:16 crc kubenswrapper[4755]: E0224 09:56:16.150914 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:16 crc kubenswrapper[4755]: I0224 09:56:16.162392 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:16 crc kubenswrapper[4755]: I0224 09:56:16.162462 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:16 crc kubenswrapper[4755]: I0224 09:56:16.162489 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:16 crc kubenswrapper[4755]: I0224 09:56:16.162518 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:16 crc kubenswrapper[4755]: I0224 09:56:16.162536 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:16Z","lastTransitionTime":"2026-02-24T09:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:16 crc kubenswrapper[4755]: E0224 09:56:16.179986 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:16 crc kubenswrapper[4755]: I0224 09:56:16.193248 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:16 crc kubenswrapper[4755]: I0224 09:56:16.193345 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:16 crc kubenswrapper[4755]: I0224 09:56:16.193371 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:16 crc kubenswrapper[4755]: I0224 09:56:16.193421 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:16 crc kubenswrapper[4755]: I0224 09:56:16.193443 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:16Z","lastTransitionTime":"2026-02-24T09:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:16 crc kubenswrapper[4755]: E0224 09:56:16.213785 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:16 crc kubenswrapper[4755]: E0224 09:56:16.214013 4755 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 09:56:16 crc kubenswrapper[4755]: E0224 09:56:16.214056 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:16 crc kubenswrapper[4755]: E0224 09:56:16.314369 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:16 crc kubenswrapper[4755]: I0224 09:56:16.315818 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:56:16 crc kubenswrapper[4755]: I0224 09:56:16.317938 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:16 crc kubenswrapper[4755]: I0224 09:56:16.317998 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:16 crc kubenswrapper[4755]: I0224 09:56:16.318019 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:16 crc kubenswrapper[4755]: E0224 09:56:16.406151 4755 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 09:56:16 crc kubenswrapper[4755]: E0224 09:56:16.414442 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:16 crc kubenswrapper[4755]: E0224 09:56:16.515358 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:16 crc kubenswrapper[4755]: 
E0224 09:56:16.616742 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:16 crc kubenswrapper[4755]: E0224 09:56:16.717361 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:16 crc kubenswrapper[4755]: E0224 09:56:16.817499 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:16 crc kubenswrapper[4755]: E0224 09:56:16.918026 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:17 crc kubenswrapper[4755]: E0224 09:56:17.019754 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:17 crc kubenswrapper[4755]: E0224 09:56:17.120648 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:17 crc kubenswrapper[4755]: E0224 09:56:17.221375 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:17 crc kubenswrapper[4755]: E0224 09:56:17.322330 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:17 crc kubenswrapper[4755]: E0224 09:56:17.423297 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:17 crc kubenswrapper[4755]: E0224 09:56:17.524406 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:17 crc kubenswrapper[4755]: E0224 09:56:17.625877 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:17 crc kubenswrapper[4755]: E0224 09:56:17.726936 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" 
Feb 24 09:56:17 crc kubenswrapper[4755]: E0224 09:56:17.827219 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:17 crc kubenswrapper[4755]: E0224 09:56:17.927610 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:18 crc kubenswrapper[4755]: E0224 09:56:18.028318 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:18 crc kubenswrapper[4755]: E0224 09:56:18.129163 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:18 crc kubenswrapper[4755]: E0224 09:56:18.230046 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:18 crc kubenswrapper[4755]: E0224 09:56:18.330339 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:18 crc kubenswrapper[4755]: E0224 09:56:18.431715 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:18 crc kubenswrapper[4755]: E0224 09:56:18.531911 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:18 crc kubenswrapper[4755]: E0224 09:56:18.632670 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:18 crc kubenswrapper[4755]: E0224 09:56:18.733595 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:18 crc kubenswrapper[4755]: E0224 09:56:18.834090 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:18 crc kubenswrapper[4755]: E0224 09:56:18.935503 4755 kubelet_node_status.go:503] "Error getting the current node 
from lister" err="node \"crc\" not found" Feb 24 09:56:19 crc kubenswrapper[4755]: E0224 09:56:19.036306 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:19 crc kubenswrapper[4755]: E0224 09:56:19.136398 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:19 crc kubenswrapper[4755]: E0224 09:56:19.236834 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:19 crc kubenswrapper[4755]: E0224 09:56:19.337664 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:19 crc kubenswrapper[4755]: E0224 09:56:19.438199 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:19 crc kubenswrapper[4755]: E0224 09:56:19.539320 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:19 crc kubenswrapper[4755]: E0224 09:56:19.640118 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:19 crc kubenswrapper[4755]: E0224 09:56:19.741049 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:19 crc kubenswrapper[4755]: E0224 09:56:19.842253 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:19 crc kubenswrapper[4755]: E0224 09:56:19.942645 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:20 crc kubenswrapper[4755]: E0224 09:56:20.043501 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:20 crc kubenswrapper[4755]: E0224 09:56:20.144545 4755 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:20 crc kubenswrapper[4755]: E0224 09:56:20.245133 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.322277 4755 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.348240 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.348299 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.348323 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.348354 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.348377 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:20Z","lastTransitionTime":"2026-02-24T09:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.450985 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.451317 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.451487 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.451629 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.451754 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:20Z","lastTransitionTime":"2026-02-24T09:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.554149 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.554429 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.554492 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.554594 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.554726 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:20Z","lastTransitionTime":"2026-02-24T09:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.657563 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.657895 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.658018 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.658174 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.658299 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:20Z","lastTransitionTime":"2026-02-24T09:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.762492 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.762549 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.762567 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.762594 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.762617 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:20Z","lastTransitionTime":"2026-02-24T09:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.865782 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.865886 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.865911 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.865941 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.865958 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:20Z","lastTransitionTime":"2026-02-24T09:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.968529 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.968591 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.968608 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.968633 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:20 crc kubenswrapper[4755]: I0224 09:56:20.968650 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:20Z","lastTransitionTime":"2026-02-24T09:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.072590 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.072655 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.072673 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.072700 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.072717 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:21Z","lastTransitionTime":"2026-02-24T09:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.176706 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.177053 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.177228 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.177546 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.177705 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:21Z","lastTransitionTime":"2026-02-24T09:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.265734 4755 apiserver.go:52] "Watching apiserver" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.275213 4755 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.276602 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-98t22","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn","openshift-ovn-kubernetes/ovnkube-node-fljft","openshift-multus/multus-additional-cni-plugins-8t77m","openshift-multus/multus-dwm6v","openshift-machine-config-operator/machine-config-daemon-8q7ll","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-target-xd92c","openshift-dns/node-resolver-2cmfc","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-operator/iptables-alerter-4ln5h","openshift-image-registry/node-ca-bxllg"] Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.277229 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.277436 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.277710 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.277819 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.278106 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.278312 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.279649 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.280505 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.280592 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.282201 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.282252 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.282269 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.282288 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.282307 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:21Z","lastTransitionTime":"2026-02-24T09:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.282794 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.282890 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.282953 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.283154 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.283344 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.283484 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.283688 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.284536 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.284647 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.284836 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8t77m" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.284999 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-2cmfc" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.287004 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.289108 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.287976 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-bxllg" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.292942 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.293231 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.293330 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.293446 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.293832 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 
09:56:21.294433 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.294514 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.294630 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.294806 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.294829 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.295063 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.295170 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.295379 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.295573 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.295648 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.296028 4755 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.296269 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.296284 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.296400 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.296517 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.296560 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.296676 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.297021 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.297217 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.302022 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.302589 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.303350 4755 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.303731 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.304012 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.304334 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.304729 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.305245 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.317545 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.349880 4755 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.381421 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: 
\"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.381818 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.382199 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.382367 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.382514 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.382669 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.382979 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.383163 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.383357 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.383507 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.383661 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.383812 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 24 09:56:21 crc 
kubenswrapper[4755]: I0224 09:56:21.383968 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.384210 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.384367 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.384519 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.384666 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.384821 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.384968 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.385142 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.385301 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.385453 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.385600 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 24 09:56:21 crc 
kubenswrapper[4755]: I0224 09:56:21.385746 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.385884 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.386024 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.386196 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.386360 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.386506 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.386657 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.386806 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.386960 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.387374 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.387453 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.387516 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.387570 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.387619 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.387675 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.387625 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.387719 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.388045 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.388148 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.388207 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.388275 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.386358 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.388424 4755 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.388464 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.388510 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.389130 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:21Z","lastTransitionTime":"2026-02-24T09:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.388332 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.389830 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.389889 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " 
Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.390057 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.390155 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.390210 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.390265 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.390325 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.390376 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.390411 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.390446 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.390486 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.390558 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.390593 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.390699 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.390740 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.390774 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.390810 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.390936 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.390974 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.391013 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.391164 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.391247 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.391286 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.391321 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.391376 4755 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.391429 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.382710 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.383912 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.391481 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.391694 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.391748 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.391803 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.391865 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.391913 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.391986 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.392033 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.392532 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.392593 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.392642 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 09:56:21 
crc kubenswrapper[4755]: I0224 09:56:21.392811 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.392873 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.392929 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.392975 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.393025 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.393110 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") 
pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.393169 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.393222 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.393269 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.393327 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.393381 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 
09:56:21.393432 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.393581 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.393637 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.393696 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.393756 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.393816 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.393866 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.393918 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.393962 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.394013 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.394115 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:56:21 
crc kubenswrapper[4755]: I0224 09:56:21.394221 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.394277 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.394331 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.394395 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.394452 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.394501 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.394551 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.395207 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.395344 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.395412 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.395474 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.384018 
4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.395542 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.395460 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.395602 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.395660 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.395723 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.395780 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.395822 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.395836 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.395980 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.396028 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.396180 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.396226 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.396268 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.396271 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.396303 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.396319 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.396457 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.396767 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.396826 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.396882 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.396968 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.396384 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.397540 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.397611 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.397635 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.397725 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.397778 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.397789 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.397840 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.397899 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.397953 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.398006 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.398101 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.398168 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" 
(UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.398224 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.398283 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.398342 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.398399 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.398457 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " 
Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.398516 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.398571 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.398642 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.398701 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.398756 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.398818 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.398877 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.398931 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.399004 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.399061 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.399150 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 09:56:21 
crc kubenswrapper[4755]: I0224 09:56:21.399200 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.399277 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.399339 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.399395 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.399455 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.399577 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.399639 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.399694 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.399847 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.399900 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.399959 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 24 09:56:21 crc kubenswrapper[4755]: 
I0224 09:56:21.400010 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.400094 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.400158 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.400215 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.400275 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.400334 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.400396 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.400451 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.400506 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.400561 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.400618 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.400677 4755 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.400743 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.400795 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.400851 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.400914 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.400971 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.401032 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.401119 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.401171 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.401230 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.398001 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.401285 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.401292 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.398244 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.384546 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.384844 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.385150 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.385300 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.385419 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.385641 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.385670 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.386193 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.386524 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.386620 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.386787 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.386970 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.387542 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.387614 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.387643 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.387984 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.388318 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.388346 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.388845 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.389102 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.389353 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.389391 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.389625 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.389837 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.390752 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.391933 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.392199 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.393106 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.393103 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.393159 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.393636 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.393675 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.393738 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.394295 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.394343 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.394782 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.395372 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.395370 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.398696 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.398685 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.398808 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.399214 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.399358 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.399544 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.399636 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.399711 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.400031 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.400100 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.384043 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.400315 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.400903 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.401156 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.401238 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.401272 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.401721 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.401732 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.401341 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.402446 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.402466 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.402513 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.402570 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.402629 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.402694 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.402756 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.402813 4755 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.402867 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.402899 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.403004 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxn7g\" (UniqueName: \"kubernetes.io/projected/787109ef-edb9-4334-afc7-6197f57f444f-kube-api-access-kxn7g\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.403119 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.403190 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f6407399-185a-4b27-bd1d-d3816e43a0b5-mcd-auth-proxy-config\") pod \"machine-config-daemon-8q7ll\" (UID: \"f6407399-185a-4b27-bd1d-d3816e43a0b5\") " pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.403270 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m82v\" (UniqueName: \"kubernetes.io/projected/82775556-3991-45ab-ac50-7ef81cafeaee-kube-api-access-9m82v\") pod \"network-metrics-daemon-98t22\" (UID: \"82775556-3991-45ab-ac50-7ef81cafeaee\") " pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.403339 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.403393 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.403443 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-var-lib-openvswitch\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.403500 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/04c132ba-c396-4f64-a02b-fcdae681ed74-hosts-file\") pod \"node-resolver-2cmfc\" (UID: \"04c132ba-c396-4f64-a02b-fcdae681ed74\") " pod="openshift-dns/node-resolver-2cmfc" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.403518 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.403558 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.403614 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-etc-kubernetes\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.403663 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/787109ef-edb9-4334-afc7-6197f57f444f-ovnkube-script-lib\") pod 
\"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.403494 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.403715 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f842f5c8-ff09-48b2-9805-ad9de28e2ea7-cni-binary-copy\") pod \"multus-additional-cni-plugins-8t77m\" (UID: \"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\") " pod="openshift-multus/multus-additional-cni-plugins-8t77m" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.403776 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dec1056d-97aa-4dfc-a63d-d729dfdb88f5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-85cjn\" (UID: \"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.403830 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dec1056d-97aa-4dfc-a63d-d729dfdb88f5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-85cjn\" (UID: \"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.403939 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzxz9\" (UniqueName: \"kubernetes.io/projected/04c132ba-c396-4f64-a02b-fcdae681ed74-kube-api-access-mzxz9\") pod \"node-resolver-2cmfc\" (UID: \"04c132ba-c396-4f64-a02b-fcdae681ed74\") " pod="openshift-dns/node-resolver-2cmfc" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.404013 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-etc-openvswitch\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.404062 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dec1056d-97aa-4dfc-a63d-d729dfdb88f5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-85cjn\" (UID: \"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.404130 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.404160 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-multus-conf-dir\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 
09:56:21.404238 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.404308 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.404357 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/79ca0953-3a40-45a2-9305-02272f036006-cni-binary-copy\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.404405 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-host-var-lib-cni-bin\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.404467 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:56:21 crc 
kubenswrapper[4755]: I0224 09:56:21.404522 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.404573 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/79ca0953-3a40-45a2-9305-02272f036006-multus-daemon-config\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.404623 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-systemd-units\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.404691 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.404746 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f6407399-185a-4b27-bd1d-d3816e43a0b5-proxy-tls\") pod \"machine-config-daemon-8q7ll\" (UID: \"f6407399-185a-4b27-bd1d-d3816e43a0b5\") " 
pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.404790 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-os-release\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.404837 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-host-var-lib-kubelet\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.404892 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-run-systemd\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.404947 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-node-log\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.404999 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.405049 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bvck\" (UniqueName: \"kubernetes.io/projected/6ca669cb-3429-4187-bee6-232dbd316c67-kube-api-access-4bvck\") pod \"node-ca-bxllg\" (UID: \"6ca669cb-3429-4187-bee6-232dbd316c67\") " pod="openshift-image-registry/node-ca-bxllg" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.405152 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f842f5c8-ff09-48b2-9805-ad9de28e2ea7-os-release\") pod \"multus-additional-cni-plugins-8t77m\" (UID: \"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\") " pod="openshift-multus/multus-additional-cni-plugins-8t77m" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.405201 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-cni-bin\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.405250 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/787109ef-edb9-4334-afc7-6197f57f444f-ovnkube-config\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.405312 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.405377 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.405431 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f842f5c8-ff09-48b2-9805-ad9de28e2ea7-cnibin\") pod \"multus-additional-cni-plugins-8t77m\" (UID: \"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\") " pod="openshift-multus/multus-additional-cni-plugins-8t77m" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.405481 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-run-openvswitch\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.405531 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-run-ovn\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.405581 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serviceca\" (UniqueName: \"kubernetes.io/configmap/6ca669cb-3429-4187-bee6-232dbd316c67-serviceca\") pod \"node-ca-bxllg\" (UID: \"6ca669cb-3429-4187-bee6-232dbd316c67\") " pod="openshift-image-registry/node-ca-bxllg" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.405634 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs\") pod \"network-metrics-daemon-98t22\" (UID: \"82775556-3991-45ab-ac50-7ef81cafeaee\") " pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.405683 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-host-run-netns\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.405733 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-host-run-multus-certs\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.405792 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f842f5c8-ff09-48b2-9805-ad9de28e2ea7-system-cni-dir\") pod \"multus-additional-cni-plugins-8t77m\" (UID: \"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\") " pod="openshift-multus/multus-additional-cni-plugins-8t77m" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.405846 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.405899 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-run-netns\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.405950 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-log-socket\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406002 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ca669cb-3429-4187-bee6-232dbd316c67-host\") pod \"node-ca-bxllg\" (UID: \"6ca669cb-3429-4187-bee6-232dbd316c67\") " pod="openshift-image-registry/node-ca-bxllg" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406059 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpwjj\" (UniqueName: \"kubernetes.io/projected/f842f5c8-ff09-48b2-9805-ad9de28e2ea7-kube-api-access-zpwjj\") pod \"multus-additional-cni-plugins-8t77m\" (UID: \"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\") " pod="openshift-multus/multus-additional-cni-plugins-8t77m" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406148 4755 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-cni-netd\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406199 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f6407399-185a-4b27-bd1d-d3816e43a0b5-rootfs\") pod \"machine-config-daemon-8q7ll\" (UID: \"f6407399-185a-4b27-bd1d-d3816e43a0b5\") " pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406246 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-system-cni-dir\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406293 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-cnibin\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406349 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-host-run-k8s-cni-cncf-io\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406405 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-hostroot\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406455 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/787109ef-edb9-4334-afc7-6197f57f444f-ovn-node-metrics-cert\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406501 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-kubelet\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406548 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-slash\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406615 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406668 4755 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwtmq\" (UniqueName: \"kubernetes.io/projected/f6407399-185a-4b27-bd1d-d3816e43a0b5-kube-api-access-qwtmq\") pod \"machine-config-daemon-8q7ll\" (UID: \"f6407399-185a-4b27-bd1d-d3816e43a0b5\") " pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406718 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-host-var-lib-cni-multus\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406777 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w29tr\" (UniqueName: \"kubernetes.io/projected/79ca0953-3a40-45a2-9305-02272f036006-kube-api-access-w29tr\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406827 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/787109ef-edb9-4334-afc7-6197f57f444f-env-overrides\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406875 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-multus-cni-dir\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: 
I0224 09:56:21.406925 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-multus-socket-dir-parent\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406977 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f842f5c8-ff09-48b2-9805-ad9de28e2ea7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8t77m\" (UID: \"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\") " pod="openshift-multus/multus-additional-cni-plugins-8t77m" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407037 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f842f5c8-ff09-48b2-9805-ad9de28e2ea7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8t77m\" (UID: \"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\") " pod="openshift-multus/multus-additional-cni-plugins-8t77m" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407128 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-run-ovn-kubernetes\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407181 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h28np\" (UniqueName: \"kubernetes.io/projected/dec1056d-97aa-4dfc-a63d-d729dfdb88f5-kube-api-access-h28np\") pod \"ovnkube-control-plane-749d76644c-85cjn\" (UID: 
\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407334 4755 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407381 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407411 4755 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407442 4755 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407476 4755 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407506 4755 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407535 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 
24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407564 4755 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407593 4755 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407625 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407658 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407689 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407720 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407748 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" 
DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407776 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407805 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407836 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407868 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407899 4755 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407929 4755 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407962 4755 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc 
kubenswrapper[4755]: I0224 09:56:21.407995 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408032 4755 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408061 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408208 4755 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408241 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408274 4755 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408311 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408341 4755 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408371 4755 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408402 4755 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408433 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408465 4755 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408496 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408527 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408564 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath 
\"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408595 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408626 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408657 4755 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408689 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408718 4755 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408751 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408781 4755 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408811 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408841 4755 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408870 4755 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408907 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408936 4755 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408966 4755 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408999 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.409030 4755 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node 
\"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.417250 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.403689 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.420415 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.404456 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.404776 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.404858 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.404861 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.404951 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.405270 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.405531 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.405557 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.405587 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.405645 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.405781 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.405994 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406111 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406261 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406392 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406504 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406200 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406697 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.406957 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407246 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407439 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.407623 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408136 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408271 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.408990 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.409203 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.409240 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.409371 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.409506 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.409547 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.409870 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.410949 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.411780 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.411890 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:56:21.911834686 +0000 UTC m=+86.368357339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.411879 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.412292 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.412507 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.412611 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.412677 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.413013 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.413397 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.414158 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.414455 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.414811 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.414037 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.415859 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.415918 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.415942 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.416006 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.416122 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.416623 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.416921 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.416998 4755 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.418306 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.420204 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.420659 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.421277 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.421442 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:56:21.921421252 +0000 UTC m=+86.377943795 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.421890 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.421911 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.421941 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.422274 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.422347 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.422550 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.422562 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.422659 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.422746 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.422790 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.422826 4755 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.422835 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.423159 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.423189 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.423273 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.423603 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.423675 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.423963 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.424115 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.424765 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.424977 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.427770 4755 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.427878 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:56:21.927852738 +0000 UTC m=+86.384375311 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.428093 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.428156 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.428384 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.429588 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.429889 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.434234 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.427261 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.436973 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.438099 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.440737 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.440798 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.440830 4755 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.440958 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 09:56:21.940914465 +0000 UTC m=+86.397437228 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.446755 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.447215 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.448141 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.448388 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.448739 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.448999 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.449060 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.449452 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.449706 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.451692 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.451750 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.451950 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.452546 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.453341 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.454637 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.454670 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.454695 4755 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.454771 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 09:56:21.954746076 +0000 UTC m=+86.411268629 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.455282 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.456028 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.456331 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.456366 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.456350 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.456780 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.456996 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.457388 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.457487 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.458303 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.458409 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.458682 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.458719 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.458781 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.459221 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.459419 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.459730 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.460146 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.460419 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.461634 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.463673 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.466662 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.469779 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.472829 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.482747 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.482760 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.487972 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.492345 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.492374 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.492384 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.492401 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.492412 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:21Z","lastTransitionTime":"2026-02-24T09:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.493849 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.501962 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.505717 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.509965 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-run-netns\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.510000 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-log-socket\") pod 
\"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.510024 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpwjj\" (UniqueName: \"kubernetes.io/projected/f842f5c8-ff09-48b2-9805-ad9de28e2ea7-kube-api-access-zpwjj\") pod \"multus-additional-cni-plugins-8t77m\" (UID: \"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\") " pod="openshift-multus/multus-additional-cni-plugins-8t77m" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.510340 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-cni-netd\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.510404 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-cni-netd\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.510109 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-run-netns\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.510364 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ca669cb-3429-4187-bee6-232dbd316c67-host\") pod \"node-ca-bxllg\" (UID: \"6ca669cb-3429-4187-bee6-232dbd316c67\") " 
pod="openshift-image-registry/node-ca-bxllg" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.510460 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-host-run-k8s-cni-cncf-io\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.510156 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-log-socket\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.510514 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-host-run-k8s-cni-cncf-io\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.510533 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ca669cb-3429-4187-bee6-232dbd316c67-host\") pod \"node-ca-bxllg\" (UID: \"6ca669cb-3429-4187-bee6-232dbd316c67\") " pod="openshift-image-registry/node-ca-bxllg" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.510481 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-hostroot\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.510557 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-hostroot\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.510668 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/787109ef-edb9-4334-afc7-6197f57f444f-ovn-node-metrics-cert\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.510731 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f6407399-185a-4b27-bd1d-d3816e43a0b5-rootfs\") pod \"machine-config-daemon-8q7ll\" (UID: \"f6407399-185a-4b27-bd1d-d3816e43a0b5\") " pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.510767 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-system-cni-dir\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.510797 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-cnibin\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.510823 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwtmq\" (UniqueName: \"kubernetes.io/projected/f6407399-185a-4b27-bd1d-d3816e43a0b5-kube-api-access-qwtmq\") pod 
\"machine-config-daemon-8q7ll\" (UID: \"f6407399-185a-4b27-bd1d-d3816e43a0b5\") " pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.510850 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-host-var-lib-cni-multus\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.510878 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f6407399-185a-4b27-bd1d-d3816e43a0b5-rootfs\") pod \"machine-config-daemon-8q7ll\" (UID: \"f6407399-185a-4b27-bd1d-d3816e43a0b5\") " pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.510908 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w29tr\" (UniqueName: \"kubernetes.io/projected/79ca0953-3a40-45a2-9305-02272f036006-kube-api-access-w29tr\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.510939 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-kubelet\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.510956 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-host-var-lib-cni-multus\") pod \"multus-dwm6v\" (UID: 
\"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511000 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-slash\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.510907 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-system-cni-dir\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511046 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/787109ef-edb9-4334-afc7-6197f57f444f-env-overrides\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511048 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-cnibin\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511093 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f842f5c8-ff09-48b2-9805-ad9de28e2ea7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8t77m\" (UID: \"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\") " pod="openshift-multus/multus-additional-cni-plugins-8t77m" Feb 24 09:56:21 crc kubenswrapper[4755]: 
I0224 09:56:21.511131 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-slash\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511182 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-kubelet\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511221 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-run-ovn-kubernetes\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511255 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h28np\" (UniqueName: \"kubernetes.io/projected/dec1056d-97aa-4dfc-a63d-d729dfdb88f5-kube-api-access-h28np\") pod \"ovnkube-control-plane-749d76644c-85cjn\" (UID: \"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511280 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-multus-cni-dir\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511304 4755 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-multus-socket-dir-parent\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511329 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f842f5c8-ff09-48b2-9805-ad9de28e2ea7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8t77m\" (UID: \"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\") " pod="openshift-multus/multus-additional-cni-plugins-8t77m" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511367 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-var-lib-openvswitch\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511394 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxn7g\" (UniqueName: \"kubernetes.io/projected/787109ef-edb9-4334-afc7-6197f57f444f-kube-api-access-kxn7g\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511422 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f6407399-185a-4b27-bd1d-d3816e43a0b5-mcd-auth-proxy-config\") pod \"machine-config-daemon-8q7ll\" (UID: \"f6407399-185a-4b27-bd1d-d3816e43a0b5\") " pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511445 4755 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m82v\" (UniqueName: \"kubernetes.io/projected/82775556-3991-45ab-ac50-7ef81cafeaee-kube-api-access-9m82v\") pod \"network-metrics-daemon-98t22\" (UID: \"82775556-3991-45ab-ac50-7ef81cafeaee\") " pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511469 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-etc-kubernetes\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511491 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/787109ef-edb9-4334-afc7-6197f57f444f-ovnkube-script-lib\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511520 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/04c132ba-c396-4f64-a02b-fcdae681ed74-hosts-file\") pod \"node-resolver-2cmfc\" (UID: \"04c132ba-c396-4f64-a02b-fcdae681ed74\") " pod="openshift-dns/node-resolver-2cmfc" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511575 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f842f5c8-ff09-48b2-9805-ad9de28e2ea7-cni-binary-copy\") pod \"multus-additional-cni-plugins-8t77m\" (UID: \"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\") " pod="openshift-multus/multus-additional-cni-plugins-8t77m" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511602 4755 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dec1056d-97aa-4dfc-a63d-d729dfdb88f5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-85cjn\" (UID: \"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511627 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzxz9\" (UniqueName: \"kubernetes.io/projected/04c132ba-c396-4f64-a02b-fcdae681ed74-kube-api-access-mzxz9\") pod \"node-resolver-2cmfc\" (UID: \"04c132ba-c396-4f64-a02b-fcdae681ed74\") " pod="openshift-dns/node-resolver-2cmfc" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511649 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-etc-openvswitch\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511674 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dec1056d-97aa-4dfc-a63d-d729dfdb88f5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-85cjn\" (UID: \"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511697 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dec1056d-97aa-4dfc-a63d-d729dfdb88f5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-85cjn\" (UID: \"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" Feb 24 09:56:21 crc 
kubenswrapper[4755]: I0224 09:56:21.511728 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511749 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-multus-conf-dir\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511770 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-host-var-lib-cni-bin\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511800 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/787109ef-edb9-4334-afc7-6197f57f444f-env-overrides\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511806 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/79ca0953-3a40-45a2-9305-02272f036006-cni-binary-copy\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511845 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/79ca0953-3a40-45a2-9305-02272f036006-multus-daemon-config\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511865 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-systemd-units\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511883 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f6407399-185a-4b27-bd1d-d3816e43a0b5-proxy-tls\") pod \"machine-config-daemon-8q7ll\" (UID: \"f6407399-185a-4b27-bd1d-d3816e43a0b5\") " pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511902 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-os-release\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511918 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-host-var-lib-kubelet\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511945 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") 
pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511972 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bvck\" (UniqueName: \"kubernetes.io/projected/6ca669cb-3429-4187-bee6-232dbd316c67-kube-api-access-4bvck\") pod \"node-ca-bxllg\" (UID: \"6ca669cb-3429-4187-bee6-232dbd316c67\") " pod="openshift-image-registry/node-ca-bxllg" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.511990 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f842f5c8-ff09-48b2-9805-ad9de28e2ea7-os-release\") pod \"multus-additional-cni-plugins-8t77m\" (UID: \"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\") " pod="openshift-multus/multus-additional-cni-plugins-8t77m" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512013 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-run-systemd\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512030 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-node-log\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512048 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-cni-bin\") pod \"ovnkube-node-fljft\" (UID: 
\"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512084 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/787109ef-edb9-4334-afc7-6197f57f444f-ovnkube-config\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512119 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-run-openvswitch\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512136 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-run-ovn\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512163 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f842f5c8-ff09-48b2-9805-ad9de28e2ea7-cnibin\") pod \"multus-additional-cni-plugins-8t77m\" (UID: \"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\") " pod="openshift-multus/multus-additional-cni-plugins-8t77m" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512179 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-host-run-multus-certs\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " 
pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512213 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f842f5c8-ff09-48b2-9805-ad9de28e2ea7-system-cni-dir\") pod \"multus-additional-cni-plugins-8t77m\" (UID: \"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\") " pod="openshift-multus/multus-additional-cni-plugins-8t77m" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512231 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512250 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6ca669cb-3429-4187-bee6-232dbd316c67-serviceca\") pod \"node-ca-bxllg\" (UID: \"6ca669cb-3429-4187-bee6-232dbd316c67\") " pod="openshift-image-registry/node-ca-bxllg" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512267 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs\") pod \"network-metrics-daemon-98t22\" (UID: \"82775556-3991-45ab-ac50-7ef81cafeaee\") " pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512284 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-host-run-netns\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " 
pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512339 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-multus-cni-dir\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512380 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-host-run-netns\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512356 4755 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512484 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512517 4755 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512541 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512554 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-multus-socket-dir-parent\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512623 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512650 4755 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512672 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512695 4755 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512715 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512923 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/79ca0953-3a40-45a2-9305-02272f036006-multus-daemon-config\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc 
kubenswrapper[4755]: I0224 09:56:21.512961 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-systemd-units\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512990 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.513361 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-run-ovn-kubernetes\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.513641 4755 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.516638 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.516668 4755 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.516686 4755 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.516702 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.516719 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.516736 4755 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.516753 4755 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.516769 4755 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.516786 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.516801 4755 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 24 
09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.516816 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.516832 4755 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.516851 4755 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.516869 4755 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.516887 4755 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.516902 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.516918 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc 
kubenswrapper[4755]: I0224 09:56:21.516933 4755 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.516950 4755 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.516967 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.516984 4755 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.516999 4755 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.517016 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.517031 4755 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.514557 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-os-release\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.514256 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dec1056d-97aa-4dfc-a63d-d729dfdb88f5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-85cjn\" (UID: \"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.516461 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/79ca0953-3a40-45a2-9305-02272f036006-cni-binary-copy\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.514447 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.514597 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.515375 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/f842f5c8-ff09-48b2-9805-ad9de28e2ea7-cni-binary-copy\") pod \"multus-additional-cni-plugins-8t77m\" (UID: \"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\") " pod="openshift-multus/multus-additional-cni-plugins-8t77m" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.515415 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-multus-conf-dir\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.515444 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.515517 4755 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.516381 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/787109ef-edb9-4334-afc7-6197f57f444f-ovnkube-script-lib\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.515617 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-run-systemd\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.515650 
4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-node-log\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.517252 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs podName:82775556-3991-45ab-ac50-7ef81cafeaee nodeName:}" failed. No retries permitted until 2026-02-24 09:56:22.017231062 +0000 UTC m=+86.473753615 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs") pod "network-metrics-daemon-98t22" (UID: "82775556-3991-45ab-ac50-7ef81cafeaee") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.514533 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-etc-kubernetes\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.517273 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f842f5c8-ff09-48b2-9805-ad9de28e2ea7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8t77m\" (UID: \"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\") " pod="openshift-multus/multus-additional-cni-plugins-8t77m" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.515713 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f6407399-185a-4b27-bd1d-d3816e43a0b5-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-8q7ll\" (UID: \"f6407399-185a-4b27-bd1d-d3816e43a0b5\") " pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.514218 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f842f5c8-ff09-48b2-9805-ad9de28e2ea7-cnibin\") pod \"multus-additional-cni-plugins-8t77m\" (UID: \"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\") " pod="openshift-multus/multus-additional-cni-plugins-8t77m" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.515843 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dec1056d-97aa-4dfc-a63d-d729dfdb88f5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-85cjn\" (UID: \"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.512485 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/04c132ba-c396-4f64-a02b-fcdae681ed74-hosts-file\") pod \"node-resolver-2cmfc\" (UID: \"04c132ba-c396-4f64-a02b-fcdae681ed74\") " pod="openshift-dns/node-resolver-2cmfc" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.513903 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-host-run-multus-certs\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.513949 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-var-lib-openvswitch\") pod \"ovnkube-node-fljft\" (UID: 
\"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.514184 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-run-ovn\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.514461 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-host-var-lib-kubelet\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.515674 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f842f5c8-ff09-48b2-9805-ad9de28e2ea7-os-release\") pod \"multus-additional-cni-plugins-8t77m\" (UID: \"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\") " pod="openshift-multus/multus-additional-cni-plugins-8t77m" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.514509 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-etc-openvswitch\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.515708 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-cni-bin\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc 
kubenswrapper[4755]: I0224 09:56:21.517055 4755 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.517614 4755 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.517640 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.517660 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.517679 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.515548 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/79ca0953-3a40-45a2-9305-02272f036006-host-var-lib-cni-bin\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.517702 4755 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc 
kubenswrapper[4755]: I0224 09:56:21.517768 4755 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.517785 4755 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.517803 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.517822 4755 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.517840 4755 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.517856 4755 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.517678 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f842f5c8-ff09-48b2-9805-ad9de28e2ea7-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-8t77m\" (UID: \"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\") " pod="openshift-multus/multus-additional-cni-plugins-8t77m" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.517873 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.518052 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.518103 4755 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.518125 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.518184 4755 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.518204 4755 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.518324 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node 
\"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.518346 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.518369 4755 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.518335 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f842f5c8-ff09-48b2-9805-ad9de28e2ea7-system-cni-dir\") pod \"multus-additional-cni-plugins-8t77m\" (UID: \"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\") " pod="openshift-multus/multus-additional-cni-plugins-8t77m" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.518581 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.513952 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-run-openvswitch\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.519468 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/787109ef-edb9-4334-afc7-6197f57f444f-ovnkube-config\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.520796 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/787109ef-edb9-4334-afc7-6197f57f444f-ovn-node-metrics-cert\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.524101 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6ca669cb-3429-4187-bee6-232dbd316c67-serviceca\") pod \"node-ca-bxllg\" (UID: \"6ca669cb-3429-4187-bee6-232dbd316c67\") " pod="openshift-image-registry/node-ca-bxllg" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.518387 4755 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.526297 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.526440 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.526914 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.527195 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.527311 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.527449 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.527558 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.527739 4755 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.527879 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.527996 4755 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.528182 4755 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.528316 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.528413 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.528493 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath 
\"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.528574 4755 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.528654 4755 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.529004 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.529142 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.529308 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.529449 4755 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.529620 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.529801 4755 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.529981 4755 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.528319 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dec1056d-97aa-4dfc-a63d-d729dfdb88f5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-85cjn\" (UID: \"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.529799 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.529624 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h28np\" (UniqueName: \"kubernetes.io/projected/dec1056d-97aa-4dfc-a63d-d729dfdb88f5-kube-api-access-h28np\") pod \"ovnkube-control-plane-749d76644c-85cjn\" (UID: \"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.529676 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bvck\" (UniqueName: \"kubernetes.io/projected/6ca669cb-3429-4187-bee6-232dbd316c67-kube-api-access-4bvck\") pod \"node-ca-bxllg\" (UID: \"6ca669cb-3429-4187-bee6-232dbd316c67\") " pod="openshift-image-registry/node-ca-bxllg" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.530323 4755 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.531456 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.531650 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.531821 4755 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.531995 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.532210 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.532312 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.532405 4755 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.532495 4755 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.532586 4755 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.532692 4755 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.532801 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.534633 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f6407399-185a-4b27-bd1d-d3816e43a0b5-proxy-tls\") pod 
\"machine-config-daemon-8q7ll\" (UID: \"f6407399-185a-4b27-bd1d-d3816e43a0b5\") " pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.534719 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.534913 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.535003 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.535107 4755 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.535203 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.535313 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.535401 4755 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.535477 4755 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.535564 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.535662 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.535746 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.535819 4755 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.535917 4755 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.536017 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" 
DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.536122 4755 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.536231 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.536338 4755 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.536427 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.536511 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.536612 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.536698 4755 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.536789 4755 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.536882 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.535851 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxn7g\" (UniqueName: \"kubernetes.io/projected/787109ef-edb9-4334-afc7-6197f57f444f-kube-api-access-kxn7g\") pod \"ovnkube-node-fljft\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") " pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.536960 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537030 4755 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537050 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537089 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.535946 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qwtmq\" (UniqueName: \"kubernetes.io/projected/f6407399-185a-4b27-bd1d-d3816e43a0b5-kube-api-access-qwtmq\") pod \"machine-config-daemon-8q7ll\" (UID: \"f6407399-185a-4b27-bd1d-d3816e43a0b5\") " pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537106 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537207 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537237 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537259 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537276 4755 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537291 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537304 
4755 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537318 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537331 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537343 4755 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537359 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537365 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m82v\" (UniqueName: \"kubernetes.io/projected/82775556-3991-45ab-ac50-7ef81cafeaee-kube-api-access-9m82v\") pod \"network-metrics-daemon-98t22\" (UID: \"82775556-3991-45ab-ac50-7ef81cafeaee\") " pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537372 4755 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: 
I0224 09:56:21.537428 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537441 4755 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537452 4755 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537466 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537476 4755 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537489 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537499 4755 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537511 4755 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537525 4755 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537536 4755 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537546 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.537558 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.541909 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzxz9\" (UniqueName: \"kubernetes.io/projected/04c132ba-c396-4f64-a02b-fcdae681ed74-kube-api-access-mzxz9\") pod \"node-resolver-2cmfc\" (UID: \"04c132ba-c396-4f64-a02b-fcdae681ed74\") " pod="openshift-dns/node-resolver-2cmfc" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.542843 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w29tr\" (UniqueName: \"kubernetes.io/projected/79ca0953-3a40-45a2-9305-02272f036006-kube-api-access-w29tr\") pod \"multus-dwm6v\" (UID: \"79ca0953-3a40-45a2-9305-02272f036006\") " 
pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.546638 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpwjj\" (UniqueName: \"kubernetes.io/projected/f842f5c8-ff09-48b2-9805-ad9de28e2ea7-kube-api-access-zpwjj\") pod \"multus-additional-cni-plugins-8t77m\" (UID: \"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\") " pod="openshift-multus/multus-additional-cni-plugins-8t77m" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.547394 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.589868 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.596503 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.596672 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.596770 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.596872 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.596964 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:21Z","lastTransitionTime":"2026-02-24T09:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.605226 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.612508 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.616663 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: W0224 09:56:21.618438 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-00d7b547c0f708a2eed5e861b77f4a633e30ba6729f90b9b7a754d3d8338efb5 WatchSource:0}: Error finding container 00d7b547c0f708a2eed5e861b77f4a633e30ba6729f90b9b7a754d3d8338efb5: Status 404 returned error can't find the container with id 00d7b547c0f708a2eed5e861b77f4a633e30ba6729f90b9b7a754d3d8338efb5 Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.619950 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.620701 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:56:21 crc kubenswrapper[4755]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 24 09:56:21 crc kubenswrapper[4755]: set -o allexport Feb 24 09:56:21 crc kubenswrapper[4755]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 24 09:56:21 crc kubenswrapper[4755]: source /etc/kubernetes/apiserver-url.env Feb 24 09:56:21 crc kubenswrapper[4755]: else Feb 24 09:56:21 crc kubenswrapper[4755]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 24 09:56:21 crc kubenswrapper[4755]: exit 1 Feb 24 09:56:21 crc kubenswrapper[4755]: fi Feb 24 09:56:21 crc kubenswrapper[4755]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 24 09:56:21 crc kubenswrapper[4755]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:56:21 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.622152 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 24 09:56:21 crc kubenswrapper[4755]: W0224 09:56:21.622788 4755 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-0d216b56cd618edc6aaa639fcdaee7c7e42d375469564e183d23894293fcad32 WatchSource:0}: Error finding container 0d216b56cd618edc6aaa639fcdaee7c7e42d375469564e183d23894293fcad32: Status 404 returned error can't find the container with id 0d216b56cd618edc6aaa639fcdaee7c7e42d375469564e183d23894293fcad32 Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.624641 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.625858 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 
24 09:56:21 crc kubenswrapper[4755]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 09:56:21 crc kubenswrapper[4755]: if [[ -f "/env/_master" ]]; then Feb 24 09:56:21 crc kubenswrapper[4755]: set -o allexport Feb 24 09:56:21 crc kubenswrapper[4755]: source "/env/_master" Feb 24 09:56:21 crc kubenswrapper[4755]: set +o allexport Feb 24 09:56:21 crc kubenswrapper[4755]: fi Feb 24 09:56:21 crc kubenswrapper[4755]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 24 09:56:21 crc kubenswrapper[4755]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 24 09:56:21 crc kubenswrapper[4755]: ho_enable="--enable-hybrid-overlay" Feb 24 09:56:21 crc kubenswrapper[4755]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 24 09:56:21 crc kubenswrapper[4755]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 24 09:56:21 crc kubenswrapper[4755]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 24 09:56:21 crc kubenswrapper[4755]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 24 09:56:21 crc kubenswrapper[4755]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 24 09:56:21 crc kubenswrapper[4755]: --webhook-host=127.0.0.1 \ Feb 24 09:56:21 crc kubenswrapper[4755]: --webhook-port=9743 \ Feb 24 09:56:21 crc kubenswrapper[4755]: ${ho_enable} \ Feb 24 09:56:21 crc kubenswrapper[4755]: --enable-interconnect \ Feb 24 09:56:21 crc kubenswrapper[4755]: --disable-approver \ Feb 24 09:56:21 crc kubenswrapper[4755]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 24 09:56:21 crc kubenswrapper[4755]: --wait-for-kubernetes-api=200s \ 
Feb 24 09:56:21 crc kubenswrapper[4755]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 24 09:56:21 crc kubenswrapper[4755]: --loglevel="${LOGLEVEL}" Feb 24 09:56:21 crc kubenswrapper[4755]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:
nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:56:21 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.629296 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:56:21 crc kubenswrapper[4755]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 09:56:21 crc kubenswrapper[4755]: if [[ -f "/env/_master" ]]; then Feb 24 09:56:21 crc kubenswrapper[4755]: set -o allexport Feb 24 09:56:21 crc kubenswrapper[4755]: source "/env/_master" Feb 24 09:56:21 crc kubenswrapper[4755]: set +o allexport Feb 24 09:56:21 crc kubenswrapper[4755]: fi Feb 24 09:56:21 crc kubenswrapper[4755]: Feb 24 09:56:21 crc kubenswrapper[4755]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 24 09:56:21 crc kubenswrapper[4755]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 24 09:56:21 crc kubenswrapper[4755]: --disable-webhook \ Feb 24 09:56:21 crc kubenswrapper[4755]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 24 09:56:21 crc kubenswrapper[4755]: --loglevel="${LOGLEVEL}" Feb 24 09:56:21 crc kubenswrapper[4755]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:56:21 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.630358 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 24 09:56:21 crc kubenswrapper[4755]: W0224 09:56:21.630381 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-61a0566f94b1ab7d917c97374eeb19d3d7cd5fdd1f780d65aa4ce011a8968d75 WatchSource:0}: Error finding container 61a0566f94b1ab7d917c97374eeb19d3d7cd5fdd1f780d65aa4ce011a8968d75: Status 404 returned error can't find the container with id 61a0566f94b1ab7d917c97374eeb19d3d7cd5fdd1f780d65aa4ce011a8968d75 Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.632721 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.633446 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.634606 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.679401 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-dwm6v" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.685610 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"00d7b547c0f708a2eed5e861b77f4a633e30ba6729f90b9b7a754d3d8338efb5"} Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.686508 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"61a0566f94b1ab7d917c97374eeb19d3d7cd5fdd1f780d65aa4ce011a8968d75"} Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.688230 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.688687 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:56:21 crc kubenswrapper[4755]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 24 09:56:21 crc kubenswrapper[4755]: set -o allexport Feb 24 09:56:21 crc kubenswrapper[4755]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 24 09:56:21 crc kubenswrapper[4755]: source /etc/kubernetes/apiserver-url.env Feb 24 09:56:21 crc 
kubenswrapper[4755]: else Feb 24 09:56:21 crc kubenswrapper[4755]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 24 09:56:21 crc kubenswrapper[4755]: exit 1 Feb 24 09:56:21 crc kubenswrapper[4755]: fi Feb 24 09:56:21 crc kubenswrapper[4755]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 24 09:56:21 crc kubenswrapper[4755]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,V
alue:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f
5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:56:21 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.689337 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, 
cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.689459 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0d216b56cd618edc6aaa639fcdaee7c7e42d375469564e183d23894293fcad32"} Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.690469 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.691321 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:56:21 crc kubenswrapper[4755]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 09:56:21 crc kubenswrapper[4755]: if [[ -f "/env/_master" ]]; then Feb 24 09:56:21 crc kubenswrapper[4755]: set -o allexport Feb 24 09:56:21 crc kubenswrapper[4755]: source "/env/_master" Feb 24 09:56:21 crc kubenswrapper[4755]: set +o allexport Feb 24 09:56:21 crc kubenswrapper[4755]: fi Feb 24 09:56:21 crc kubenswrapper[4755]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 24 09:56:21 crc kubenswrapper[4755]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 24 09:56:21 crc kubenswrapper[4755]: ho_enable="--enable-hybrid-overlay" Feb 24 09:56:21 crc kubenswrapper[4755]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 24 09:56:21 crc kubenswrapper[4755]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 24 09:56:21 crc kubenswrapper[4755]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 24 09:56:21 crc kubenswrapper[4755]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 24 09:56:21 crc kubenswrapper[4755]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 24 09:56:21 crc kubenswrapper[4755]: --webhook-host=127.0.0.1 \ Feb 24 09:56:21 crc kubenswrapper[4755]: --webhook-port=9743 \ Feb 24 09:56:21 crc kubenswrapper[4755]: ${ho_enable} \ Feb 24 09:56:21 crc kubenswrapper[4755]: --enable-interconnect \ Feb 24 09:56:21 crc kubenswrapper[4755]: --disable-approver \ Feb 24 09:56:21 crc kubenswrapper[4755]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 24 09:56:21 crc kubenswrapper[4755]: --wait-for-kubernetes-api=200s \ Feb 24 09:56:21 crc kubenswrapper[4755]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 24 09:56:21 crc kubenswrapper[4755]: --loglevel="${LOGLEVEL}" Feb 24 09:56:21 crc kubenswrapper[4755]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:56:21 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:21 crc kubenswrapper[4755]: W0224 09:56:21.691966 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79ca0953_3a40_45a2_9305_02272f036006.slice/crio-02d8375d2e62a07de8fdd30bc9280bf1797de7b88f9d79a4793a99259a61f806 WatchSource:0}: Error finding container 
02d8375d2e62a07de8fdd30bc9280bf1797de7b88f9d79a4793a99259a61f806: Status 404 returned error can't find the container with id 02d8375d2e62a07de8fdd30bc9280bf1797de7b88f9d79a4793a99259a61f806 Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.693997 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.695308 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:56:21 crc kubenswrapper[4755]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 09:56:21 crc kubenswrapper[4755]: if [[ -f "/env/_master" ]]; then Feb 24 09:56:21 crc kubenswrapper[4755]: set -o allexport Feb 24 09:56:21 crc kubenswrapper[4755]: source "/env/_master" Feb 24 09:56:21 crc kubenswrapper[4755]: set +o allexport Feb 24 09:56:21 crc kubenswrapper[4755]: fi Feb 24 09:56:21 crc kubenswrapper[4755]: Feb 24 09:56:21 crc kubenswrapper[4755]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 24 09:56:21 crc kubenswrapper[4755]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 24 09:56:21 crc kubenswrapper[4755]: --disable-webhook \ Feb 24 09:56:21 crc kubenswrapper[4755]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 24 09:56:21 crc kubenswrapper[4755]: --loglevel="${LOGLEVEL}" Feb 24 09:56:21 crc kubenswrapper[4755]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:56:21 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.696805 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:56:21 crc kubenswrapper[4755]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 24 09:56:21 crc kubenswrapper[4755]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 24 09:56:21 crc 
kubenswrapper[4755]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s
.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w29tr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,St
dinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-dwm6v_openshift-multus(79ca0953-3a40-45a2-9305-02272f036006): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:56:21 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.697906 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.697964 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-dwm6v" podUID="79ca0953-3a40-45a2-9305-02272f036006" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.699273 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8t77m" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.700418 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.700516 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.700539 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.700575 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.700596 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:21Z","lastTransitionTime":"2026-02-24T09:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.703843 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: W0224 09:56:21.711998 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6407399_185a_4b27_bd1d_d3816e43a0b5.slice/crio-7f45a7873c12a758e324c76ee4dc8d2e3b6e4305508ccef9b2a7fccc17b71060 WatchSource:0}: Error finding container 7f45a7873c12a758e324c76ee4dc8d2e3b6e4305508ccef9b2a7fccc17b71060: Status 404 returned error can't find the container with id 7f45a7873c12a758e324c76ee4dc8d2e3b6e4305508ccef9b2a7fccc17b71060 Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.716013 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 
0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwtmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 09:56:21 crc kubenswrapper[4755]: W0224 09:56:21.717474 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf842f5c8_ff09_48b2_9805_ad9de28e2ea7.slice/crio-036410a344001cca9cea97ac17f46d1eba83f5139cfd8168fead212f442fa3c9 WatchSource:0}: Error finding container 036410a344001cca9cea97ac17f46d1eba83f5139cfd8168fead212f442fa3c9: Status 404 returned error can't find 
the container with id 036410a344001cca9cea97ac17f46d1eba83f5139cfd8168fead212f442fa3c9 Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.718904 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwtmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.720270 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.720199 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.722560 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zpwjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifec
ycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-8t77m_openshift-multus(f842f5c8-ff09-48b2-9805-ad9de28e2ea7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.723516 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-2cmfc" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.723749 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-8t77m" podUID="f842f5c8-ff09-48b2-9805-ad9de28e2ea7" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.735917 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: W0224 09:56:21.739128 4755 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04c132ba_c396_4f64_a02b_fcdae681ed74.slice/crio-45bb2d248f0c0c04b11dd3ebb442c42a62943782ecc801154ca72433e97045ba WatchSource:0}: Error finding container 45bb2d248f0c0c04b11dd3ebb442c42a62943782ecc801154ca72433e97045ba: Status 404 returned error can't find the container with id 45bb2d248f0c0c04b11dd3ebb442c42a62943782ecc801154ca72433e97045ba Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.743279 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:56:21 crc kubenswrapper[4755]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Feb 24 09:56:21 crc kubenswrapper[4755]: set -uo pipefail Feb 24 09:56:21 crc kubenswrapper[4755]: Feb 24 09:56:21 crc kubenswrapper[4755]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 24 09:56:21 crc kubenswrapper[4755]: Feb 24 09:56:21 crc kubenswrapper[4755]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 24 09:56:21 crc kubenswrapper[4755]: HOSTS_FILE="/etc/hosts" Feb 24 09:56:21 crc kubenswrapper[4755]: TEMP_FILE="/etc/hosts.tmp" Feb 24 09:56:21 crc kubenswrapper[4755]: Feb 24 09:56:21 crc kubenswrapper[4755]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 24 09:56:21 crc kubenswrapper[4755]: Feb 24 09:56:21 crc kubenswrapper[4755]: # Make a temporary file with the old hosts file's attributes. Feb 24 09:56:21 crc kubenswrapper[4755]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 24 09:56:21 crc kubenswrapper[4755]: echo "Failed to preserve hosts file. Exiting." 
Feb 24 09:56:21 crc kubenswrapper[4755]: exit 1 Feb 24 09:56:21 crc kubenswrapper[4755]: fi Feb 24 09:56:21 crc kubenswrapper[4755]: Feb 24 09:56:21 crc kubenswrapper[4755]: while true; do Feb 24 09:56:21 crc kubenswrapper[4755]: declare -A svc_ips Feb 24 09:56:21 crc kubenswrapper[4755]: for svc in "${services[@]}"; do Feb 24 09:56:21 crc kubenswrapper[4755]: # Fetch service IP from cluster dns if present. We make several tries Feb 24 09:56:21 crc kubenswrapper[4755]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 24 09:56:21 crc kubenswrapper[4755]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 24 09:56:21 crc kubenswrapper[4755]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 24 09:56:21 crc kubenswrapper[4755]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 24 09:56:21 crc kubenswrapper[4755]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 24 09:56:21 crc kubenswrapper[4755]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 24 09:56:21 crc kubenswrapper[4755]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 24 09:56:21 crc kubenswrapper[4755]: for i in ${!cmds[*]} Feb 24 09:56:21 crc kubenswrapper[4755]: do Feb 24 09:56:21 crc kubenswrapper[4755]: ips=($(eval "${cmds[i]}")) Feb 24 09:56:21 crc kubenswrapper[4755]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 24 09:56:21 crc kubenswrapper[4755]: svc_ips["${svc}"]="${ips[@]}" Feb 24 09:56:21 crc kubenswrapper[4755]: break Feb 24 09:56:21 crc kubenswrapper[4755]: fi Feb 24 09:56:21 crc kubenswrapper[4755]: done Feb 24 09:56:21 crc kubenswrapper[4755]: done Feb 24 09:56:21 crc kubenswrapper[4755]: Feb 24 09:56:21 crc kubenswrapper[4755]: # Update /etc/hosts only if we get valid service IPs Feb 24 09:56:21 crc kubenswrapper[4755]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 24 09:56:21 crc kubenswrapper[4755]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 24 09:56:21 crc kubenswrapper[4755]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 24 09:56:21 crc kubenswrapper[4755]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 24 09:56:21 crc kubenswrapper[4755]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 24 09:56:21 crc kubenswrapper[4755]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 24 09:56:21 crc kubenswrapper[4755]: sleep 60 & wait Feb 24 09:56:21 crc kubenswrapper[4755]: continue Feb 24 09:56:21 crc kubenswrapper[4755]: fi Feb 24 09:56:21 crc kubenswrapper[4755]: Feb 24 09:56:21 crc kubenswrapper[4755]: # Append resolver entries for services Feb 24 09:56:21 crc kubenswrapper[4755]: rc=0 Feb 24 09:56:21 crc kubenswrapper[4755]: for svc in "${!svc_ips[@]}"; do Feb 24 09:56:21 crc kubenswrapper[4755]: for ip in ${svc_ips[${svc}]}; do Feb 24 09:56:21 crc kubenswrapper[4755]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Feb 24 09:56:21 crc kubenswrapper[4755]: done Feb 24 09:56:21 crc kubenswrapper[4755]: done Feb 24 09:56:21 crc kubenswrapper[4755]: if [[ $rc -ne 0 ]]; then Feb 24 09:56:21 crc kubenswrapper[4755]: sleep 60 & wait Feb 24 09:56:21 crc kubenswrapper[4755]: continue Feb 24 09:56:21 crc kubenswrapper[4755]: fi Feb 24 09:56:21 crc kubenswrapper[4755]: Feb 24 09:56:21 crc kubenswrapper[4755]: Feb 24 09:56:21 crc kubenswrapper[4755]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 24 09:56:21 crc kubenswrapper[4755]: # Replace /etc/hosts with our modified version if needed Feb 24 09:56:21 crc kubenswrapper[4755]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 24 09:56:21 crc kubenswrapper[4755]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 24 09:56:21 crc kubenswrapper[4755]: fi Feb 24 09:56:21 crc kubenswrapper[4755]: sleep 60 & wait Feb 24 09:56:21 crc kubenswrapper[4755]: unset svc_ips Feb 24 09:56:21 crc kubenswrapper[4755]: done Feb 24 09:56:21 crc kubenswrapper[4755]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mzxz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-2cmfc_openshift-dns(04c132ba-c396-4f64-a02b-fcdae681ed74): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:56:21 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.744500 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-2cmfc" podUID="04c132ba-c396-4f64-a02b-fcdae681ed74" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.746881 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.750892 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-bxllg" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.759380 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.763193 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.775182 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.776921 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.779432 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:56:21 crc kubenswrapper[4755]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 24 09:56:21 crc kubenswrapper[4755]: while [ true ]; Feb 24 09:56:21 crc kubenswrapper[4755]: do Feb 24 09:56:21 crc kubenswrapper[4755]: for f in $(ls /tmp/serviceca); do Feb 24 09:56:21 crc kubenswrapper[4755]: echo $f Feb 24 09:56:21 crc kubenswrapper[4755]: ca_file_path="/tmp/serviceca/${f}" Feb 24 09:56:21 crc kubenswrapper[4755]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 24 09:56:21 crc kubenswrapper[4755]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 24 09:56:21 crc kubenswrapper[4755]: if [ -e "${reg_dir_path}" ]; then Feb 24 09:56:21 crc kubenswrapper[4755]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 24 09:56:21 crc kubenswrapper[4755]: else Feb 24 09:56:21 crc kubenswrapper[4755]: mkdir $reg_dir_path Feb 24 09:56:21 crc kubenswrapper[4755]: cp $ca_file_path $reg_dir_path/ca.crt Feb 24 09:56:21 crc kubenswrapper[4755]: fi Feb 24 09:56:21 crc kubenswrapper[4755]: done Feb 24 09:56:21 crc kubenswrapper[4755]: for d in $(ls /etc/docker/certs.d); do Feb 24 09:56:21 crc kubenswrapper[4755]: echo $d Feb 24 09:56:21 crc kubenswrapper[4755]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 24 09:56:21 crc kubenswrapper[4755]: reg_conf_path="/tmp/serviceca/${dp}" Feb 24 09:56:21 crc kubenswrapper[4755]: if [ ! 
-e "${reg_conf_path}" ]; then Feb 24 09:56:21 crc kubenswrapper[4755]: rm -rf /etc/docker/certs.d/$d Feb 24 09:56:21 crc kubenswrapper[4755]: fi Feb 24 09:56:21 crc kubenswrapper[4755]: done Feb 24 09:56:21 crc kubenswrapper[4755]: sleep 60 & wait ${!} Feb 24 09:56:21 crc kubenswrapper[4755]: done Feb 24 09:56:21 crc kubenswrapper[4755]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4bvck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-bxllg_openshift-image-registry(6ca669cb-3429-4187-bee6-232dbd316c67): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:56:21 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:21 crc kubenswrapper[4755]: W0224 09:56:21.780500 4755 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod787109ef_edb9_4334_afc7_6197f57f444f.slice/crio-07b43dba684f13da28a69ca000291cca1139ad30738590ad3cda0dad0590c7a5 WatchSource:0}: Error finding container 07b43dba684f13da28a69ca000291cca1139ad30738590ad3cda0dad0590c7a5: Status 404 returned error can't find the container with id 07b43dba684f13da28a69ca000291cca1139ad30738590ad3cda0dad0590c7a5 Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.780629 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-bxllg" podUID="6ca669cb-3429-4187-bee6-232dbd316c67" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.787335 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:56:21 crc kubenswrapper[4755]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 24 09:56:21 crc kubenswrapper[4755]: apiVersion: v1 Feb 24 09:56:21 crc kubenswrapper[4755]: clusters: Feb 24 09:56:21 crc kubenswrapper[4755]: - cluster: Feb 24 09:56:21 crc kubenswrapper[4755]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 24 09:56:21 crc kubenswrapper[4755]: server: https://api-int.crc.testing:6443 Feb 24 09:56:21 crc kubenswrapper[4755]: name: default-cluster Feb 24 09:56:21 crc kubenswrapper[4755]: contexts: Feb 24 09:56:21 crc kubenswrapper[4755]: - context: Feb 24 09:56:21 crc kubenswrapper[4755]: cluster: default-cluster Feb 24 09:56:21 crc kubenswrapper[4755]: namespace: default Feb 24 09:56:21 crc kubenswrapper[4755]: user: default-auth Feb 24 09:56:21 crc kubenswrapper[4755]: name: 
default-context Feb 24 09:56:21 crc kubenswrapper[4755]: current-context: default-context Feb 24 09:56:21 crc kubenswrapper[4755]: kind: Config Feb 24 09:56:21 crc kubenswrapper[4755]: preferences: {} Feb 24 09:56:21 crc kubenswrapper[4755]: users: Feb 24 09:56:21 crc kubenswrapper[4755]: - name: default-auth Feb 24 09:56:21 crc kubenswrapper[4755]: user: Feb 24 09:56:21 crc kubenswrapper[4755]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 24 09:56:21 crc kubenswrapper[4755]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 24 09:56:21 crc kubenswrapper[4755]: EOF Feb 24 09:56:21 crc kubenswrapper[4755]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kxn7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:56:21 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.787328 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.788483 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" podUID="787109ef-edb9-4334-afc7-6197f57f444f" Feb 24 09:56:21 crc kubenswrapper[4755]: W0224 09:56:21.794349 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddec1056d_97aa_4dfc_a63d_d729dfdb88f5.slice/crio-add38a48c32ee74229d57b554047be60437fa86a2d8c8e2c6a6ba0356a0e6b9e WatchSource:0}: Error finding container add38a48c32ee74229d57b554047be60437fa86a2d8c8e2c6a6ba0356a0e6b9e: Status 404 returned error can't find the container with id add38a48c32ee74229d57b554047be60437fa86a2d8c8e2c6a6ba0356a0e6b9e Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.797049 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:56:21 crc kubenswrapper[4755]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Feb 24 09:56:21 crc kubenswrapper[4755]: set -euo pipefail Feb 24 09:56:21 crc kubenswrapper[4755]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 24 09:56:21 crc kubenswrapper[4755]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 24 09:56:21 crc kubenswrapper[4755]: # As the secret mount is optional we must wait for the files to be present. Feb 24 09:56:21 crc kubenswrapper[4755]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Feb 24 09:56:21 crc kubenswrapper[4755]: TS=$(date +%s) Feb 24 09:56:21 crc kubenswrapper[4755]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 24 09:56:21 crc kubenswrapper[4755]: HAS_LOGGED_INFO=0 Feb 24 09:56:21 crc kubenswrapper[4755]: Feb 24 09:56:21 crc kubenswrapper[4755]: log_missing_certs(){ Feb 24 09:56:21 crc kubenswrapper[4755]: CUR_TS=$(date +%s) Feb 24 09:56:21 crc kubenswrapper[4755]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 24 09:56:21 crc kubenswrapper[4755]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Feb 24 09:56:21 crc kubenswrapper[4755]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 24 09:56:21 crc kubenswrapper[4755]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Feb 24 09:56:21 crc kubenswrapper[4755]: HAS_LOGGED_INFO=1 Feb 24 09:56:21 crc kubenswrapper[4755]: fi Feb 24 09:56:21 crc kubenswrapper[4755]: } Feb 24 09:56:21 crc kubenswrapper[4755]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Feb 24 09:56:21 crc kubenswrapper[4755]: log_missing_certs Feb 24 09:56:21 crc kubenswrapper[4755]: sleep 5 Feb 24 09:56:21 crc kubenswrapper[4755]: done Feb 24 09:56:21 crc kubenswrapper[4755]: Feb 24 09:56:21 crc kubenswrapper[4755]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 24 09:56:21 crc kubenswrapper[4755]: exec /usr/bin/kube-rbac-proxy \ Feb 24 09:56:21 crc kubenswrapper[4755]: --logtostderr \ Feb 24 09:56:21 crc kubenswrapper[4755]: --secure-listen-address=:9108 \ Feb 24 09:56:21 crc kubenswrapper[4755]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 24 09:56:21 crc kubenswrapper[4755]: --upstream=http://127.0.0.1:29108/ \ Feb 24 09:56:21 crc kubenswrapper[4755]: --tls-private-key-file=${TLS_PK} \ Feb 24 09:56:21 crc kubenswrapper[4755]: --tls-cert-file=${TLS_CERT} Feb 24 09:56:21 crc kubenswrapper[4755]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h28np,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-85cjn_openshift-ovn-kubernetes(dec1056d-97aa-4dfc-a63d-d729dfdb88f5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:56:21 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.797312 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.799617 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:56:21 crc kubenswrapper[4755]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 09:56:21 crc kubenswrapper[4755]: if [[ -f "/env/_master" ]]; then Feb 24 09:56:21 crc kubenswrapper[4755]: set -o allexport Feb 24 09:56:21 crc kubenswrapper[4755]: source "/env/_master" Feb 24 09:56:21 crc kubenswrapper[4755]: set +o allexport Feb 24 
09:56:21 crc kubenswrapper[4755]: fi Feb 24 09:56:21 crc kubenswrapper[4755]: Feb 24 09:56:21 crc kubenswrapper[4755]: ovn_v4_join_subnet_opt= Feb 24 09:56:21 crc kubenswrapper[4755]: if [[ "" != "" ]]; then Feb 24 09:56:21 crc kubenswrapper[4755]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 24 09:56:21 crc kubenswrapper[4755]: fi Feb 24 09:56:21 crc kubenswrapper[4755]: ovn_v6_join_subnet_opt= Feb 24 09:56:21 crc kubenswrapper[4755]: if [[ "" != "" ]]; then Feb 24 09:56:21 crc kubenswrapper[4755]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 24 09:56:21 crc kubenswrapper[4755]: fi Feb 24 09:56:21 crc kubenswrapper[4755]: Feb 24 09:56:21 crc kubenswrapper[4755]: ovn_v4_transit_switch_subnet_opt= Feb 24 09:56:21 crc kubenswrapper[4755]: if [[ "" != "" ]]; then Feb 24 09:56:21 crc kubenswrapper[4755]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 24 09:56:21 crc kubenswrapper[4755]: fi Feb 24 09:56:21 crc kubenswrapper[4755]: ovn_v6_transit_switch_subnet_opt= Feb 24 09:56:21 crc kubenswrapper[4755]: if [[ "" != "" ]]; then Feb 24 09:56:21 crc kubenswrapper[4755]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 24 09:56:21 crc kubenswrapper[4755]: fi Feb 24 09:56:21 crc kubenswrapper[4755]: Feb 24 09:56:21 crc kubenswrapper[4755]: dns_name_resolver_enabled_flag= Feb 24 09:56:21 crc kubenswrapper[4755]: if [[ "false" == "true" ]]; then Feb 24 09:56:21 crc kubenswrapper[4755]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 24 09:56:21 crc kubenswrapper[4755]: fi Feb 24 09:56:21 crc kubenswrapper[4755]: Feb 24 09:56:21 crc kubenswrapper[4755]: persistent_ips_enabled_flag= Feb 24 09:56:21 crc kubenswrapper[4755]: if [[ "true" == "true" ]]; then Feb 24 09:56:21 crc kubenswrapper[4755]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 24 09:56:21 crc kubenswrapper[4755]: fi Feb 24 09:56:21 crc kubenswrapper[4755]: Feb 24 09:56:21 crc kubenswrapper[4755]: # 
This is needed so that converting clusters from GA to TP Feb 24 09:56:21 crc kubenswrapper[4755]: # will rollout control plane pods as well Feb 24 09:56:21 crc kubenswrapper[4755]: network_segmentation_enabled_flag= Feb 24 09:56:21 crc kubenswrapper[4755]: multi_network_enabled_flag= Feb 24 09:56:21 crc kubenswrapper[4755]: if [[ "true" == "true" ]]; then Feb 24 09:56:21 crc kubenswrapper[4755]: multi_network_enabled_flag="--enable-multi-network" Feb 24 09:56:21 crc kubenswrapper[4755]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 24 09:56:21 crc kubenswrapper[4755]: fi Feb 24 09:56:21 crc kubenswrapper[4755]: Feb 24 09:56:21 crc kubenswrapper[4755]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 24 09:56:21 crc kubenswrapper[4755]: exec /usr/bin/ovnkube \ Feb 24 09:56:21 crc kubenswrapper[4755]: --enable-interconnect \ Feb 24 09:56:21 crc kubenswrapper[4755]: --init-cluster-manager "${K8S_NODE}" \ Feb 24 09:56:21 crc kubenswrapper[4755]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 24 09:56:21 crc kubenswrapper[4755]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 24 09:56:21 crc kubenswrapper[4755]: --metrics-bind-address "127.0.0.1:29108" \ Feb 24 09:56:21 crc kubenswrapper[4755]: --metrics-enable-pprof \ Feb 24 09:56:21 crc kubenswrapper[4755]: --metrics-enable-config-duration \ Feb 24 09:56:21 crc kubenswrapper[4755]: ${ovn_v4_join_subnet_opt} \ Feb 24 09:56:21 crc kubenswrapper[4755]: ${ovn_v6_join_subnet_opt} \ Feb 24 09:56:21 crc kubenswrapper[4755]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 24 09:56:21 crc kubenswrapper[4755]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 24 09:56:21 crc kubenswrapper[4755]: ${dns_name_resolver_enabled_flag} \ Feb 24 09:56:21 crc kubenswrapper[4755]: ${persistent_ips_enabled_flag} \ Feb 24 09:56:21 crc kubenswrapper[4755]: ${multi_network_enabled_flag} \ Feb 24 09:56:21 crc kubenswrapper[4755]: ${network_segmentation_enabled_flag} 
Feb 24 09:56:21 crc kubenswrapper[4755]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h28np,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-85cjn_openshift-ovn-kubernetes(dec1056d-97aa-4dfc-a63d-d729dfdb88f5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:56:21 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.801406 4755 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" podUID="dec1056d-97aa-4dfc-a63d-d729dfdb88f5" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.803672 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.803710 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.803722 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.803760 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.803787 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:21Z","lastTransitionTime":"2026-02-24T09:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.811599 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.825926 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.837131 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.848752 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.858799 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.878011 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.889093 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.901516 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.907417 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.907496 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.907522 4755 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.907555 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.907579 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:21Z","lastTransitionTime":"2026-02-24T09:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.912315 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.924104 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.933880 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.940731 4755 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.940942 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.941038 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:56:22.941004485 +0000 UTC m=+87.397527058 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.941136 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.941340 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.941401 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.941431 4755 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.941469 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.941570 4755 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.941642 4755 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.942438 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 09:56:22.94177662 +0000 UTC m=+87.398299213 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.942535 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:56:22.942510023 +0000 UTC m=+87.399032616 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:56:21 crc kubenswrapper[4755]: E0224 09:56:21.942647 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:56:22.942624837 +0000 UTC m=+87.399147550 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.945298 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.964054 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.975634 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:21 crc kubenswrapper[4755]: I0224 09:56:21.993208 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.007630 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.010488 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.010542 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.010562 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.010591 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.010610 4755 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:22Z","lastTransitionTime":"2026-02-24T09:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.025959 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.042794 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs\") pod \"network-metrics-daemon-98t22\" (UID: \"82775556-3991-45ab-ac50-7ef81cafeaee\") " pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.042888 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.043092 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.043124 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 
09:56:22.043145 4755 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.043225 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 09:56:23.043187988 +0000 UTC m=+87.499710581 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.043470 4755 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.043682 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs podName:82775556-3991-45ab-ac50-7ef81cafeaee nodeName:}" failed. No retries permitted until 2026-02-24 09:56:23.043645563 +0000 UTC m=+87.500168206 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs") pod "network-metrics-daemon-98t22" (UID: "82775556-3991-45ab-ac50-7ef81cafeaee") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.046133 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.058868 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.069303 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hos
tIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.114496 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.114818 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.114884 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.114921 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.114947 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:22Z","lastTransitionTime":"2026-02-24T09:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.218211 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.218268 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.218292 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.218313 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.218330 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:22Z","lastTransitionTime":"2026-02-24T09:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.320662 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.320866 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.320890 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.320915 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.320932 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:22Z","lastTransitionTime":"2026-02-24T09:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.325009 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.327059 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.330185 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.331740 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.333789 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.334838 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.336054 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.338364 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" 
path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.340151 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.342582 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.343932 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.346337 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.347404 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.348520 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.350659 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.351784 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.353871 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.354811 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.356038 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.358347 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.359422 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.361600 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.362580 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.364719 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.365841 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.367475 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.370448 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.371559 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.373697 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.374737 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.377284 4755 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.377557 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.382514 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.384411 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.385163 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.387687 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.388479 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.389502 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.390198 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.391325 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.391908 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.393030 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.393715 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.394789 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.395474 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.396573 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.397410 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.398817 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.399474 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.400409 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.400876 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.401918 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.402500 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.402979 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.424872 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.424914 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:22 crc 
kubenswrapper[4755]: I0224 09:56:22.424923 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.424938 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.424948 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:22Z","lastTransitionTime":"2026-02-24T09:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.527412 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.527480 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.527834 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.528180 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.528670 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:22Z","lastTransitionTime":"2026-02-24T09:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.630891 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.630962 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.630981 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.631008 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.631027 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:22Z","lastTransitionTime":"2026-02-24T09:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.693099 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dwm6v" event={"ID":"79ca0953-3a40-45a2-9305-02272f036006","Type":"ContainerStarted","Data":"02d8375d2e62a07de8fdd30bc9280bf1797de7b88f9d79a4793a99259a61f806"} Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.695631 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" event={"ID":"dec1056d-97aa-4dfc-a63d-d729dfdb88f5","Type":"ContainerStarted","Data":"add38a48c32ee74229d57b554047be60437fa86a2d8c8e2c6a6ba0356a0e6b9e"} Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.696875 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:56:22 crc kubenswrapper[4755]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 24 09:56:22 crc kubenswrapper[4755]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 24 09:56:22 crc kubenswrapper[4755]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w29tr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-dwm6v_openshift-multus(79ca0953-3a40-45a2-9305-02272f036006): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:56:22 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.697316 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:56:22 crc kubenswrapper[4755]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Feb 24 09:56:22 crc kubenswrapper[4755]: set -euo pipefail Feb 24 09:56:22 crc kubenswrapper[4755]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 24 09:56:22 crc kubenswrapper[4755]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 24 09:56:22 crc kubenswrapper[4755]: # As the secret mount is optional we must wait for the files to be present. Feb 24 09:56:22 crc kubenswrapper[4755]: # The service is created in monitor.yaml and this is created in sdn.yaml. Feb 24 09:56:22 crc kubenswrapper[4755]: TS=$(date +%s) Feb 24 09:56:22 crc kubenswrapper[4755]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 24 09:56:22 crc kubenswrapper[4755]: HAS_LOGGED_INFO=0 Feb 24 09:56:22 crc kubenswrapper[4755]: Feb 24 09:56:22 crc kubenswrapper[4755]: log_missing_certs(){ Feb 24 09:56:22 crc kubenswrapper[4755]: CUR_TS=$(date +%s) Feb 24 09:56:22 crc kubenswrapper[4755]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 24 09:56:22 crc kubenswrapper[4755]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. 
Feb 24 09:56:22 crc kubenswrapper[4755]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 24 09:56:22 crc kubenswrapper[4755]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Feb 24 09:56:22 crc kubenswrapper[4755]: HAS_LOGGED_INFO=1 Feb 24 09:56:22 crc kubenswrapper[4755]: fi Feb 24 09:56:22 crc kubenswrapper[4755]: } Feb 24 09:56:22 crc kubenswrapper[4755]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Feb 24 09:56:22 crc kubenswrapper[4755]: log_missing_certs Feb 24 09:56:22 crc kubenswrapper[4755]: sleep 5 Feb 24 09:56:22 crc kubenswrapper[4755]: done Feb 24 09:56:22 crc kubenswrapper[4755]: Feb 24 09:56:22 crc kubenswrapper[4755]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 24 09:56:22 crc kubenswrapper[4755]: exec /usr/bin/kube-rbac-proxy \ Feb 24 09:56:22 crc kubenswrapper[4755]: --logtostderr \ Feb 24 09:56:22 crc kubenswrapper[4755]: --secure-listen-address=:9108 \ Feb 24 09:56:22 crc kubenswrapper[4755]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 24 09:56:22 crc kubenswrapper[4755]: --upstream=http://127.0.0.1:29108/ \ Feb 24 09:56:22 crc kubenswrapper[4755]: --tls-private-key-file=${TLS_PK} \ Feb 24 09:56:22 crc kubenswrapper[4755]: --tls-cert-file=${TLS_CERT} Feb 24 09:56:22 crc kubenswrapper[4755]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h28np,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-85cjn_openshift-ovn-kubernetes(dec1056d-97aa-4dfc-a63d-d729dfdb88f5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:56:22 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.697616 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerStarted","Data":"07b43dba684f13da28a69ca000291cca1139ad30738590ad3cda0dad0590c7a5"} Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.698091 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-dwm6v" podUID="79ca0953-3a40-45a2-9305-02272f036006" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.699273 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2cmfc" 
event={"ID":"04c132ba-c396-4f64-a02b-fcdae681ed74","Type":"ContainerStarted","Data":"45bb2d248f0c0c04b11dd3ebb442c42a62943782ecc801154ca72433e97045ba"} Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.700302 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:56:22 crc kubenswrapper[4755]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 09:56:22 crc kubenswrapper[4755]: if [[ -f "/env/_master" ]]; then Feb 24 09:56:22 crc kubenswrapper[4755]: set -o allexport Feb 24 09:56:22 crc kubenswrapper[4755]: source "/env/_master" Feb 24 09:56:22 crc kubenswrapper[4755]: set +o allexport Feb 24 09:56:22 crc kubenswrapper[4755]: fi Feb 24 09:56:22 crc kubenswrapper[4755]: Feb 24 09:56:22 crc kubenswrapper[4755]: ovn_v4_join_subnet_opt= Feb 24 09:56:22 crc kubenswrapper[4755]: if [[ "" != "" ]]; then Feb 24 09:56:22 crc kubenswrapper[4755]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 24 09:56:22 crc kubenswrapper[4755]: fi Feb 24 09:56:22 crc kubenswrapper[4755]: ovn_v6_join_subnet_opt= Feb 24 09:56:22 crc kubenswrapper[4755]: if [[ "" != "" ]]; then Feb 24 09:56:22 crc kubenswrapper[4755]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 24 09:56:22 crc kubenswrapper[4755]: fi Feb 24 09:56:22 crc kubenswrapper[4755]: Feb 24 09:56:22 crc kubenswrapper[4755]: ovn_v4_transit_switch_subnet_opt= Feb 24 09:56:22 crc kubenswrapper[4755]: if [[ "" != "" ]]; then Feb 24 09:56:22 crc kubenswrapper[4755]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 24 09:56:22 crc kubenswrapper[4755]: fi Feb 24 09:56:22 crc kubenswrapper[4755]: ovn_v6_transit_switch_subnet_opt= Feb 24 09:56:22 crc kubenswrapper[4755]: if [[ "" != "" ]]; then Feb 24 09:56:22 crc kubenswrapper[4755]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 
24 09:56:22 crc kubenswrapper[4755]: fi Feb 24 09:56:22 crc kubenswrapper[4755]: Feb 24 09:56:22 crc kubenswrapper[4755]: dns_name_resolver_enabled_flag= Feb 24 09:56:22 crc kubenswrapper[4755]: if [[ "false" == "true" ]]; then Feb 24 09:56:22 crc kubenswrapper[4755]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 24 09:56:22 crc kubenswrapper[4755]: fi Feb 24 09:56:22 crc kubenswrapper[4755]: Feb 24 09:56:22 crc kubenswrapper[4755]: persistent_ips_enabled_flag= Feb 24 09:56:22 crc kubenswrapper[4755]: if [[ "true" == "true" ]]; then Feb 24 09:56:22 crc kubenswrapper[4755]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 24 09:56:22 crc kubenswrapper[4755]: fi Feb 24 09:56:22 crc kubenswrapper[4755]: Feb 24 09:56:22 crc kubenswrapper[4755]: # This is needed so that converting clusters from GA to TP Feb 24 09:56:22 crc kubenswrapper[4755]: # will rollout control plane pods as well Feb 24 09:56:22 crc kubenswrapper[4755]: network_segmentation_enabled_flag= Feb 24 09:56:22 crc kubenswrapper[4755]: multi_network_enabled_flag= Feb 24 09:56:22 crc kubenswrapper[4755]: if [[ "true" == "true" ]]; then Feb 24 09:56:22 crc kubenswrapper[4755]: multi_network_enabled_flag="--enable-multi-network" Feb 24 09:56:22 crc kubenswrapper[4755]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 24 09:56:22 crc kubenswrapper[4755]: fi Feb 24 09:56:22 crc kubenswrapper[4755]: Feb 24 09:56:22 crc kubenswrapper[4755]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 24 09:56:22 crc kubenswrapper[4755]: exec /usr/bin/ovnkube \ Feb 24 09:56:22 crc kubenswrapper[4755]: --enable-interconnect \ Feb 24 09:56:22 crc kubenswrapper[4755]: --init-cluster-manager "${K8S_NODE}" \ Feb 24 09:56:22 crc kubenswrapper[4755]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 24 09:56:22 crc kubenswrapper[4755]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 24 09:56:22 crc kubenswrapper[4755]: 
--metrics-bind-address "127.0.0.1:29108" \ Feb 24 09:56:22 crc kubenswrapper[4755]: --metrics-enable-pprof \ Feb 24 09:56:22 crc kubenswrapper[4755]: --metrics-enable-config-duration \ Feb 24 09:56:22 crc kubenswrapper[4755]: ${ovn_v4_join_subnet_opt} \ Feb 24 09:56:22 crc kubenswrapper[4755]: ${ovn_v6_join_subnet_opt} \ Feb 24 09:56:22 crc kubenswrapper[4755]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 24 09:56:22 crc kubenswrapper[4755]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 24 09:56:22 crc kubenswrapper[4755]: ${dns_name_resolver_enabled_flag} \ Feb 24 09:56:22 crc kubenswrapper[4755]: ${persistent_ips_enabled_flag} \ Feb 24 09:56:22 crc kubenswrapper[4755]: ${multi_network_enabled_flag} \ Feb 24 09:56:22 crc kubenswrapper[4755]: ${network_segmentation_enabled_flag} Feb 24 09:56:22 crc kubenswrapper[4755]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h28np,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-85cjn_openshift-ovn-kubernetes(dec1056d-97aa-4dfc-a63d-d729dfdb88f5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:56:22 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.700373 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:56:22 crc kubenswrapper[4755]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 24 09:56:22 crc kubenswrapper[4755]: apiVersion: v1 Feb 24 09:56:22 crc kubenswrapper[4755]: clusters: Feb 24 09:56:22 crc kubenswrapper[4755]: - cluster: Feb 24 09:56:22 crc kubenswrapper[4755]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 24 09:56:22 crc kubenswrapper[4755]: server: https://api-int.crc.testing:6443 Feb 24 09:56:22 crc kubenswrapper[4755]: name: default-cluster Feb 24 09:56:22 crc 
kubenswrapper[4755]: contexts: Feb 24 09:56:22 crc kubenswrapper[4755]: - context: Feb 24 09:56:22 crc kubenswrapper[4755]: cluster: default-cluster Feb 24 09:56:22 crc kubenswrapper[4755]: namespace: default Feb 24 09:56:22 crc kubenswrapper[4755]: user: default-auth Feb 24 09:56:22 crc kubenswrapper[4755]: name: default-context Feb 24 09:56:22 crc kubenswrapper[4755]: current-context: default-context Feb 24 09:56:22 crc kubenswrapper[4755]: kind: Config Feb 24 09:56:22 crc kubenswrapper[4755]: preferences: {} Feb 24 09:56:22 crc kubenswrapper[4755]: users: Feb 24 09:56:22 crc kubenswrapper[4755]: - name: default-auth Feb 24 09:56:22 crc kubenswrapper[4755]: user: Feb 24 09:56:22 crc kubenswrapper[4755]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 24 09:56:22 crc kubenswrapper[4755]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 24 09:56:22 crc kubenswrapper[4755]: EOF Feb 24 09:56:22 crc kubenswrapper[4755]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kxn7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f): CreateContainerConfigError: services have not 
yet been read at least once, cannot construct envvars Feb 24 09:56:22 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.701015 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bxllg" event={"ID":"6ca669cb-3429-4187-bee6-232dbd316c67","Type":"ContainerStarted","Data":"d326a8e77efbbef3e3aa7abf858f81e5108fa33e019c9a03787678ebbe7aaa7b"} Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.701472 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" podUID="dec1056d-97aa-4dfc-a63d-d729dfdb88f5" Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.701558 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" podUID="787109ef-edb9-4334-afc7-6197f57f444f" Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.701709 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:56:22 crc kubenswrapper[4755]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Feb 24 09:56:22 crc kubenswrapper[4755]: set -uo pipefail Feb 24 09:56:22 crc kubenswrapper[4755]: Feb 24 09:56:22 crc kubenswrapper[4755]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 24 09:56:22 crc kubenswrapper[4755]: Feb 
24 09:56:22 crc kubenswrapper[4755]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 24 09:56:22 crc kubenswrapper[4755]: HOSTS_FILE="/etc/hosts" Feb 24 09:56:22 crc kubenswrapper[4755]: TEMP_FILE="/etc/hosts.tmp" Feb 24 09:56:22 crc kubenswrapper[4755]: Feb 24 09:56:22 crc kubenswrapper[4755]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 24 09:56:22 crc kubenswrapper[4755]: Feb 24 09:56:22 crc kubenswrapper[4755]: # Make a temporary file with the old hosts file's attributes. Feb 24 09:56:22 crc kubenswrapper[4755]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 24 09:56:22 crc kubenswrapper[4755]: echo "Failed to preserve hosts file. Exiting." Feb 24 09:56:22 crc kubenswrapper[4755]: exit 1 Feb 24 09:56:22 crc kubenswrapper[4755]: fi Feb 24 09:56:22 crc kubenswrapper[4755]: Feb 24 09:56:22 crc kubenswrapper[4755]: while true; do Feb 24 09:56:22 crc kubenswrapper[4755]: declare -A svc_ips Feb 24 09:56:22 crc kubenswrapper[4755]: for svc in "${services[@]}"; do Feb 24 09:56:22 crc kubenswrapper[4755]: # Fetch service IP from cluster dns if present. We make several tries Feb 24 09:56:22 crc kubenswrapper[4755]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 24 09:56:22 crc kubenswrapper[4755]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 24 09:56:22 crc kubenswrapper[4755]: # support UDP loadbalancers and require reaching DNS through TCP. 
Feb 24 09:56:22 crc kubenswrapper[4755]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 24 09:56:22 crc kubenswrapper[4755]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 24 09:56:22 crc kubenswrapper[4755]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 24 09:56:22 crc kubenswrapper[4755]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 24 09:56:22 crc kubenswrapper[4755]: for i in ${!cmds[*]} Feb 24 09:56:22 crc kubenswrapper[4755]: do Feb 24 09:56:22 crc kubenswrapper[4755]: ips=($(eval "${cmds[i]}")) Feb 24 09:56:22 crc kubenswrapper[4755]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 24 09:56:22 crc kubenswrapper[4755]: svc_ips["${svc}"]="${ips[@]}" Feb 24 09:56:22 crc kubenswrapper[4755]: break Feb 24 09:56:22 crc kubenswrapper[4755]: fi Feb 24 09:56:22 crc kubenswrapper[4755]: done Feb 24 09:56:22 crc kubenswrapper[4755]: done Feb 24 09:56:22 crc kubenswrapper[4755]: Feb 24 09:56:22 crc kubenswrapper[4755]: # Update /etc/hosts only if we get valid service IPs Feb 24 09:56:22 crc kubenswrapper[4755]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 24 09:56:22 crc kubenswrapper[4755]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 24 09:56:22 crc kubenswrapper[4755]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 24 09:56:22 crc kubenswrapper[4755]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 24 09:56:22 crc kubenswrapper[4755]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 24 09:56:22 crc kubenswrapper[4755]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 24 09:56:22 crc kubenswrapper[4755]: sleep 60 & wait Feb 24 09:56:22 crc kubenswrapper[4755]: continue Feb 24 09:56:22 crc kubenswrapper[4755]: fi Feb 24 09:56:22 crc kubenswrapper[4755]: Feb 24 09:56:22 crc kubenswrapper[4755]: # Append resolver entries for services Feb 24 09:56:22 crc kubenswrapper[4755]: rc=0 Feb 24 09:56:22 crc kubenswrapper[4755]: for svc in "${!svc_ips[@]}"; do Feb 24 09:56:22 crc kubenswrapper[4755]: for ip in ${svc_ips[${svc}]}; do Feb 24 09:56:22 crc kubenswrapper[4755]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 24 09:56:22 crc kubenswrapper[4755]: done Feb 24 09:56:22 crc kubenswrapper[4755]: done Feb 24 09:56:22 crc kubenswrapper[4755]: if [[ $rc -ne 0 ]]; then Feb 24 09:56:22 crc kubenswrapper[4755]: sleep 60 & wait Feb 24 09:56:22 crc kubenswrapper[4755]: continue Feb 24 09:56:22 crc kubenswrapper[4755]: fi Feb 24 09:56:22 crc kubenswrapper[4755]: Feb 24 09:56:22 crc kubenswrapper[4755]: Feb 24 09:56:22 crc kubenswrapper[4755]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 24 09:56:22 crc kubenswrapper[4755]: # Replace /etc/hosts with our modified version if needed Feb 24 09:56:22 crc kubenswrapper[4755]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 24 09:56:22 crc kubenswrapper[4755]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 24 09:56:22 crc kubenswrapper[4755]: fi Feb 24 09:56:22 crc kubenswrapper[4755]: sleep 60 & wait Feb 24 09:56:22 crc kubenswrapper[4755]: unset svc_ips Feb 24 09:56:22 crc kubenswrapper[4755]: done Feb 24 09:56:22 crc kubenswrapper[4755]: 
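The dns-node-resolver script logged above maintains marker-tagged entries in /etc/hosts: it filters out its own previously generated lines, appends fresh ones, and swaps the file only on change. A minimal standalone sketch of that rewrite pattern, run against throwaway temp files rather than the real /etc/hosts (the service name and IP below are made-up examples):

```shell
# Sketch of the marker-based /etc/hosts rewrite from the node-resolver loop
# above, using throwaway files so it is safe to run anywhere.
set -u
MARKER="openshift-generated-node-resolver"
HOSTS_FILE="$(mktemp)"
TEMP_FILE="$(mktemp)"

# Seed a hosts file with one admin-managed line and one stale generated
# line that should be dropped and replaced.
printf '127.0.0.1 localhost\n10.0.0.9 old-svc.example # %s\n' "${MARKER}" > "${HOSTS_FILE}"

# 1. Copy everything except previously generated (marker-tagged) lines.
#    With --silent, sed prints nothing; 'd' skips tagged lines before the
#    'w' command can write them, so TEMP_FILE gets only the untagged lines.
sed --silent "/# ${MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"

# 2. Append a freshly resolved entry, tagged so the next pass can find it.
#    (Hypothetical service IP/name, standing in for the dig results.)
echo "10.217.4.20 image-registry image-registry.svc.cluster.local # ${MARKER}" >> "${TEMP_FILE}"

# 3. Replace the hosts file only when the content actually changed,
#    mirroring the cmp || cp step in the logged script.
cmp -s "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}"

cat "${HOSTS_FILE}"
```

The marker comment is what makes the loop idempotent: generated lines can always be distinguished from admin-managed ones, so stale service entries are purged on every pass without touching the rest of the file.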
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mzxz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-2cmfc_openshift-dns(04c132ba-c396-4f64-a02b-fcdae681ed74): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:56:22 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.702426 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" event={"ID":"f842f5c8-ff09-48b2-9805-ad9de28e2ea7","Type":"ContainerStarted","Data":"036410a344001cca9cea97ac17f46d1eba83f5139cfd8168fead212f442fa3c9"} Feb 24 09:56:22 crc 
kubenswrapper[4755]: E0224 09:56:22.703299 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-2cmfc" podUID="04c132ba-c396-4f64-a02b-fcdae681ed74" Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.703440 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:56:22 crc kubenswrapper[4755]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 24 09:56:22 crc kubenswrapper[4755]: while [ true ]; Feb 24 09:56:22 crc kubenswrapper[4755]: do Feb 24 09:56:22 crc kubenswrapper[4755]: for f in $(ls /tmp/serviceca); do Feb 24 09:56:22 crc kubenswrapper[4755]: echo $f Feb 24 09:56:22 crc kubenswrapper[4755]: ca_file_path="/tmp/serviceca/${f}" Feb 24 09:56:22 crc kubenswrapper[4755]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 24 09:56:22 crc kubenswrapper[4755]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 24 09:56:22 crc kubenswrapper[4755]: if [ -e "${reg_dir_path}" ]; then Feb 24 09:56:22 crc kubenswrapper[4755]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 24 09:56:22 crc kubenswrapper[4755]: else Feb 24 09:56:22 crc kubenswrapper[4755]: mkdir $reg_dir_path Feb 24 09:56:22 crc kubenswrapper[4755]: cp $ca_file_path $reg_dir_path/ca.crt Feb 24 09:56:22 crc kubenswrapper[4755]: fi Feb 24 09:56:22 crc kubenswrapper[4755]: done Feb 24 09:56:22 crc kubenswrapper[4755]: for d in $(ls /etc/docker/certs.d); do Feb 24 09:56:22 crc kubenswrapper[4755]: echo $d Feb 24 09:56:22 crc kubenswrapper[4755]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 24 09:56:22 crc kubenswrapper[4755]: reg_conf_path="/tmp/serviceca/${dp}" Feb 24 09:56:22 crc kubenswrapper[4755]: if [ 
! -e "${reg_conf_path}" ]; then Feb 24 09:56:22 crc kubenswrapper[4755]: rm -rf /etc/docker/certs.d/$d Feb 24 09:56:22 crc kubenswrapper[4755]: fi Feb 24 09:56:22 crc kubenswrapper[4755]: done Feb 24 09:56:22 crc kubenswrapper[4755]: sleep 60 & wait ${!} Feb 24 09:56:22 crc kubenswrapper[4755]: done Feb 24 09:56:22 crc kubenswrapper[4755]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4bvck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-bxllg_openshift-image-registry(6ca669cb-3429-4187-bee6-232dbd316c67): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:56:22 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.704833 4755 
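The node-ca script above round-trips registry names between two encodings: keys in /tmp/serviceca cannot contain `:`, so a `host:port` pair is stored with `..` and mapped back when the /etc/docker/certs.d directory is created, then mapped forward again when pruning stale directories. A small sketch of just those two sed substitutions (the registry name is a made-up example):

```shell
# The two filename mappings from the node-ca loop above, in isolation.
# Forward: ConfigMap-safe key -> certs.d directory name ('..' -> ':').
f="myregistry.example.com..5000"
reg="$(echo "$f" | sed -r 's/(.*)\.\./\1:/')"
echo "$reg"    # myregistry.example.com:5000

# Reverse: certs.d directory name -> ConfigMap-safe key (':' -> '..'),
# used to check whether a certs.d entry still has a source file.
d="myregistry.example.com:5000"
dp="$(echo "$d" | sed -r 's/(.*):/\1\.\./')"
echo "$dp"     # myregistry.example.com..5000
```

Because `(.*)` is greedy, each substitution rewrites only the last `..` (or `:`) in the name, which is exactly the host/port separator.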
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zpwjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-8t77m_openshift-multus(f842f5c8-ff09-48b2-9805-ad9de28e2ea7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.704883 4755 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-bxllg" podUID="6ca669cb-3429-4187-bee6-232dbd316c67" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.705673 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerStarted","Data":"7f45a7873c12a758e324c76ee4dc8d2e3b6e4305508ccef9b2a7fccc17b71060"} Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.706200 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-8t77m" podUID="f842f5c8-ff09-48b2-9805-ad9de28e2ea7" Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.708676 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwtmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.708692 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.710931 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt 
--tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwtmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.712118 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" 
podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.719408 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.728821 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.733563 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.733627 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.733651 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.733682 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.733707 4755 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:22Z","lastTransitionTime":"2026-02-24T09:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.739014 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.747482 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.758841 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.769670 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.779035 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.791287 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.815591 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.833844 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.836332 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.836393 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.836407 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.836428 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.836442 4755 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:22Z","lastTransitionTime":"2026-02-24T09:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.848428 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.866893 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.885452 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.900786 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.913528 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.928541 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.939490 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.939543 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.939560 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.939582 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.939599 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:22Z","lastTransitionTime":"2026-02-24T09:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.943577 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.953986 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.954179 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.954288 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.954352 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.954538 4755 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.954628 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:56:24.954605764 +0000 UTC m=+89.411128347 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.955120 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:56:24.95510222 +0000 UTC m=+89.411624803 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.955202 4755 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.955279 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:56:24.955264786 +0000 UTC m=+89.411787369 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.955391 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.955432 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.955459 4755 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:22 crc kubenswrapper[4755]: E0224 09:56:22.955523 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 09:56:24.955502993 +0000 UTC m=+89.412025586 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.958791 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.972101 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.982010 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:22 crc kubenswrapper[4755]: I0224 09:56:22.997003 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.011193 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.025371 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.039467 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.042897 4755 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.042939 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.042952 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.042976 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.042991 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:23Z","lastTransitionTime":"2026-02-24T09:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.053732 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.055380 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.055484 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs\") pod \"network-metrics-daemon-98t22\" (UID: \"82775556-3991-45ab-ac50-7ef81cafeaee\") " pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:23 crc kubenswrapper[4755]: E0224 09:56:23.055626 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:56:23 crc kubenswrapper[4755]: E0224 09:56:23.055694 4755 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:56:23 crc kubenswrapper[4755]: E0224 09:56:23.055720 4755 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:23 crc kubenswrapper[4755]: E0224 09:56:23.055652 4755 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:56:23 crc kubenswrapper[4755]: E0224 09:56:23.055794 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 09:56:25.055767335 +0000 UTC m=+89.512289918 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:23 crc kubenswrapper[4755]: E0224 09:56:23.055924 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs podName:82775556-3991-45ab-ac50-7ef81cafeaee nodeName:}" failed. No retries permitted until 2026-02-24 09:56:25.055888299 +0000 UTC m=+89.512411042 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs") pod "network-metrics-daemon-98t22" (UID: "82775556-3991-45ab-ac50-7ef81cafeaee") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.101821 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.136740 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.146188 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.146258 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.146268 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.146292 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.146306 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:23Z","lastTransitionTime":"2026-02-24T09:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.161941 4755 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.250028 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.250138 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.250159 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.250189 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.250207 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:23Z","lastTransitionTime":"2026-02-24T09:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.315802 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.315902 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.315817 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.315843 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:23 crc kubenswrapper[4755]: E0224 09:56:23.315991 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:56:23 crc kubenswrapper[4755]: E0224 09:56:23.316123 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:56:23 crc kubenswrapper[4755]: E0224 09:56:23.316335 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:56:23 crc kubenswrapper[4755]: E0224 09:56:23.316570 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.353776 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.353853 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.353877 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.353908 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.353934 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:23Z","lastTransitionTime":"2026-02-24T09:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.457933 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.457985 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.458002 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.458023 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.458041 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:23Z","lastTransitionTime":"2026-02-24T09:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.561774 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.561873 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.561891 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.561916 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.561935 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:23Z","lastTransitionTime":"2026-02-24T09:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.665229 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.665287 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.665311 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.665342 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.665365 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:23Z","lastTransitionTime":"2026-02-24T09:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.769030 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.769168 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.769221 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.769272 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.769294 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:23Z","lastTransitionTime":"2026-02-24T09:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.895457 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.895528 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.895551 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.895578 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.895606 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:23Z","lastTransitionTime":"2026-02-24T09:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.998638 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.998697 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.998707 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.998728 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:23 crc kubenswrapper[4755]: I0224 09:56:23.998742 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:23Z","lastTransitionTime":"2026-02-24T09:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.101629 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.101824 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.101859 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.101893 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.101918 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:24Z","lastTransitionTime":"2026-02-24T09:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.205879 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.205957 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.206027 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.206062 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.206122 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:24Z","lastTransitionTime":"2026-02-24T09:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.308887 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.308943 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.308961 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.308988 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.309009 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:24Z","lastTransitionTime":"2026-02-24T09:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.412541 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.412604 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.412622 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.412650 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.412669 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:24Z","lastTransitionTime":"2026-02-24T09:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.515965 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.516038 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.516098 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.516128 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.516145 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:24Z","lastTransitionTime":"2026-02-24T09:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.618996 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.619165 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.619192 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.619223 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.619245 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:24Z","lastTransitionTime":"2026-02-24T09:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.722334 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.722439 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.722527 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.722559 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.722576 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:24Z","lastTransitionTime":"2026-02-24T09:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.825949 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.826016 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.826040 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.826105 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.826130 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:24Z","lastTransitionTime":"2026-02-24T09:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.928815 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.928876 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.928896 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.928917 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.928936 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:24Z","lastTransitionTime":"2026-02-24T09:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.977161 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.977288 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:24 crc kubenswrapper[4755]: E0224 09:56:24.977309 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:56:28.977278378 +0000 UTC m=+93.433800951 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.977389 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:24 crc kubenswrapper[4755]: I0224 09:56:24.977463 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:24 crc kubenswrapper[4755]: E0224 09:56:24.977484 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:56:24 crc kubenswrapper[4755]: E0224 09:56:24.977578 4755 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:56:24 crc kubenswrapper[4755]: E0224 09:56:24.977617 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:56:24 crc kubenswrapper[4755]: E0224 09:56:24.977645 4755 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:24 crc kubenswrapper[4755]: E0224 09:56:24.977535 4755 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:56:24 crc kubenswrapper[4755]: E0224 09:56:24.977663 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:56:28.97764323 +0000 UTC m=+93.434165803 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:56:24 crc kubenswrapper[4755]: E0224 09:56:24.977736 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 09:56:28.977715622 +0000 UTC m=+93.434238195 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:24 crc kubenswrapper[4755]: E0224 09:56:24.977758 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:56:28.977745942 +0000 UTC m=+93.434268515 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.031708 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.031778 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.031837 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.031871 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.031894 4755 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:25Z","lastTransitionTime":"2026-02-24T09:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.078749 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs\") pod \"network-metrics-daemon-98t22\" (UID: \"82775556-3991-45ab-ac50-7ef81cafeaee\") " pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.078840 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:25 crc kubenswrapper[4755]: E0224 09:56:25.079038 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:56:25 crc kubenswrapper[4755]: E0224 09:56:25.079100 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:56:25 crc kubenswrapper[4755]: E0224 09:56:25.079124 4755 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:25 crc kubenswrapper[4755]: E0224 09:56:25.079204 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 09:56:29.079181392 +0000 UTC m=+93.535703965 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:25 crc kubenswrapper[4755]: E0224 09:56:25.079392 4755 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:56:25 crc kubenswrapper[4755]: E0224 09:56:25.079444 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs podName:82775556-3991-45ab-ac50-7ef81cafeaee nodeName:}" failed. No retries permitted until 2026-02-24 09:56:29.07942851 +0000 UTC m=+93.535951083 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs") pod "network-metrics-daemon-98t22" (UID: "82775556-3991-45ab-ac50-7ef81cafeaee") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.134709 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.134772 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.134788 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.134813 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.134832 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:25Z","lastTransitionTime":"2026-02-24T09:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.238247 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.238306 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.238329 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.238357 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.238378 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:25Z","lastTransitionTime":"2026-02-24T09:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.316229 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.316232 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.316242 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.316365 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:25 crc kubenswrapper[4755]: E0224 09:56:25.316796 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:56:25 crc kubenswrapper[4755]: E0224 09:56:25.316583 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:56:25 crc kubenswrapper[4755]: E0224 09:56:25.316925 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:56:25 crc kubenswrapper[4755]: E0224 09:56:25.316390 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.341710 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.341788 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.341808 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.341838 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.341860 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:25Z","lastTransitionTime":"2026-02-24T09:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.445473 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.445536 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.445548 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.445569 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.445584 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:25Z","lastTransitionTime":"2026-02-24T09:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.548443 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.548522 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.548545 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.548574 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.548591 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:25Z","lastTransitionTime":"2026-02-24T09:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.650953 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.651176 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.651202 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.651227 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.651243 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:25Z","lastTransitionTime":"2026-02-24T09:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.754090 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.754161 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.754179 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.754205 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.754223 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:25Z","lastTransitionTime":"2026-02-24T09:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.857223 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.857300 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.857323 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.857350 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.857372 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:25Z","lastTransitionTime":"2026-02-24T09:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.960864 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.960931 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.960951 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.960977 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:25 crc kubenswrapper[4755]: I0224 09:56:25.960995 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:25Z","lastTransitionTime":"2026-02-24T09:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.064097 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.064160 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.064178 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.064204 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.064229 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:26Z","lastTransitionTime":"2026-02-24T09:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.167039 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.167184 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.167209 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.167241 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.167269 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:26Z","lastTransitionTime":"2026-02-24T09:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.270613 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.270675 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.270693 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.270719 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.270738 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:26Z","lastTransitionTime":"2026-02-24T09:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.330034 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.332283 4755 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.332349 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.332371 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.332399 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.332423 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:26Z","lastTransitionTime":"2026-02-24T09:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.341031 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:26 crc kubenswrapper[4755]: E0224 09:56:26.346395 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.351273 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.351320 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.351339 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.351359 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.351375 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:26Z","lastTransitionTime":"2026-02-24T09:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:26 crc kubenswrapper[4755]: E0224 09:56:26.366606 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.369093 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.371986 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.372035 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.372058 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.372121 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.372145 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:26Z","lastTransitionTime":"2026-02-24T09:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.384032 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:26 crc kubenswrapper[4755]: E0224 09:56:26.386258 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.395846 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.396167 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.396290 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.396430 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.396565 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:26Z","lastTransitionTime":"2026-02-24T09:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.410657 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:26 crc kubenswrapper[4755]: E0224 09:56:26.415529 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.419945 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.419990 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.420008 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.420032 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.420049 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:26Z","lastTransitionTime":"2026-02-24T09:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.427891 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:26 crc kubenswrapper[4755]: E0224 09:56:26.432181 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:26 crc kubenswrapper[4755]: E0224 09:56:26.432490 4755 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.434591 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.434662 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.434679 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.434705 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.434725 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:26Z","lastTransitionTime":"2026-02-24T09:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.442887 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.458608 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.475399 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.489322 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.502157 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.516995 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.529787 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.538043 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.538169 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.538193 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.538228 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.538252 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:26Z","lastTransitionTime":"2026-02-24T09:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.542853 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.641244 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.641303 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.641323 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.641348 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.641367 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:26Z","lastTransitionTime":"2026-02-24T09:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.743903 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.743990 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.744015 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.744046 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.744109 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:26Z","lastTransitionTime":"2026-02-24T09:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.847987 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.848349 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.848501 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.848646 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.848807 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:26Z","lastTransitionTime":"2026-02-24T09:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.952554 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.952628 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.952643 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.952668 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:26 crc kubenswrapper[4755]: I0224 09:56:26.952684 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:26Z","lastTransitionTime":"2026-02-24T09:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.056319 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.056388 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.056407 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.056432 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.056447 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:27Z","lastTransitionTime":"2026-02-24T09:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.159757 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.159827 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.159847 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.159880 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.159900 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:27Z","lastTransitionTime":"2026-02-24T09:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.263185 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.263291 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.263318 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.263357 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.263389 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:27Z","lastTransitionTime":"2026-02-24T09:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.315831 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.315889 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.315854 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:27 crc kubenswrapper[4755]: E0224 09:56:27.315992 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.316102 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:27 crc kubenswrapper[4755]: E0224 09:56:27.316208 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:56:27 crc kubenswrapper[4755]: E0224 09:56:27.316359 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:56:27 crc kubenswrapper[4755]: E0224 09:56:27.316498 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.366432 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.366744 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.366979 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.367234 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.367504 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:27Z","lastTransitionTime":"2026-02-24T09:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.471726 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.472003 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.472229 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.472375 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.472524 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:27Z","lastTransitionTime":"2026-02-24T09:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.577759 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.578162 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.578250 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.578367 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.578458 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:27Z","lastTransitionTime":"2026-02-24T09:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.682112 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.682507 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.682746 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.682952 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.683191 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:27Z","lastTransitionTime":"2026-02-24T09:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.787413 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.787499 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.787521 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.787552 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.787580 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:27Z","lastTransitionTime":"2026-02-24T09:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.891947 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.892015 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.892033 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.892060 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.892121 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:27Z","lastTransitionTime":"2026-02-24T09:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.995412 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.995472 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.995488 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.995515 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:27 crc kubenswrapper[4755]: I0224 09:56:27.995534 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:27Z","lastTransitionTime":"2026-02-24T09:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.098189 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.098259 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.098282 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.098307 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.098325 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:28Z","lastTransitionTime":"2026-02-24T09:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.200538 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.200596 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.200613 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.200636 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.200655 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:28Z","lastTransitionTime":"2026-02-24T09:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.303499 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.303575 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.303598 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.303623 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.303640 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:28Z","lastTransitionTime":"2026-02-24T09:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.406864 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.406917 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.406940 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.406968 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.406989 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:28Z","lastTransitionTime":"2026-02-24T09:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.510726 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.510813 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.510844 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.510876 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.510899 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:28Z","lastTransitionTime":"2026-02-24T09:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.613801 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.613882 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.613896 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.613920 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.613939 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:28Z","lastTransitionTime":"2026-02-24T09:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.717031 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.717138 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.717162 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.717194 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.717213 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:28Z","lastTransitionTime":"2026-02-24T09:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.819906 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.819963 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.819980 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.820003 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.820025 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:28Z","lastTransitionTime":"2026-02-24T09:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.922877 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.922939 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.922957 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.922981 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:28 crc kubenswrapper[4755]: I0224 09:56:28.923001 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:28Z","lastTransitionTime":"2026-02-24T09:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.021047 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.021211 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:29 crc kubenswrapper[4755]: E0224 09:56:29.021230 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:56:37.02120105 +0000 UTC m=+101.477723633 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.021289 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.021328 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:29 crc kubenswrapper[4755]: E0224 09:56:29.021446 4755 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:56:29 crc kubenswrapper[4755]: E0224 09:56:29.021478 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:56:29 crc kubenswrapper[4755]: E0224 09:56:29.021523 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:56:29 crc kubenswrapper[4755]: E0224 09:56:29.021543 4755 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:29 crc kubenswrapper[4755]: E0224 09:56:29.021565 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:56:37.021535261 +0000 UTC m=+101.478057844 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:56:29 crc kubenswrapper[4755]: E0224 09:56:29.021597 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 09:56:37.021582433 +0000 UTC m=+101.478105016 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:29 crc kubenswrapper[4755]: E0224 09:56:29.021463 4755 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:56:29 crc kubenswrapper[4755]: E0224 09:56:29.021642 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:56:37.021631174 +0000 UTC m=+101.478153757 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.027943 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.027999 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.028022 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.028053 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.028106 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:29Z","lastTransitionTime":"2026-02-24T09:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.121898 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.121987 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs\") pod \"network-metrics-daemon-98t22\" (UID: \"82775556-3991-45ab-ac50-7ef81cafeaee\") " pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:29 crc kubenswrapper[4755]: E0224 09:56:29.122119 4755 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:56:29 crc kubenswrapper[4755]: E0224 09:56:29.122165 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:56:29 crc kubenswrapper[4755]: E0224 09:56:29.122225 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:56:29 crc kubenswrapper[4755]: E0224 09:56:29.122252 4755 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:29 crc kubenswrapper[4755]: E0224 09:56:29.122196 
4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs podName:82775556-3991-45ab-ac50-7ef81cafeaee nodeName:}" failed. No retries permitted until 2026-02-24 09:56:37.122157864 +0000 UTC m=+101.578680417 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs") pod "network-metrics-daemon-98t22" (UID: "82775556-3991-45ab-ac50-7ef81cafeaee") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:56:29 crc kubenswrapper[4755]: E0224 09:56:29.122355 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 09:56:37.12232824 +0000 UTC m=+101.578850823 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.130251 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.130278 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.130289 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.130305 4755 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.130315 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:29Z","lastTransitionTime":"2026-02-24T09:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.233062 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.233150 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.233168 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.233191 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.233209 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:29Z","lastTransitionTime":"2026-02-24T09:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.315974 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:29 crc kubenswrapper[4755]: E0224 09:56:29.316212 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.316222 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.316261 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.316297 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:29 crc kubenswrapper[4755]: E0224 09:56:29.316324 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:56:29 crc kubenswrapper[4755]: E0224 09:56:29.316494 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:56:29 crc kubenswrapper[4755]: E0224 09:56:29.316761 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.333047 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.333857 4755 scope.go:117] "RemoveContainer" containerID="efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6" Feb 24 09:56:29 crc kubenswrapper[4755]: E0224 09:56:29.334144 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.337817 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.337926 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.337951 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 
09:56:29.338051 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.338132 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:29Z","lastTransitionTime":"2026-02-24T09:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.441751 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.441834 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.441852 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.441876 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.441923 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:29Z","lastTransitionTime":"2026-02-24T09:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.545394 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.545455 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.545481 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.545512 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.545533 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:29Z","lastTransitionTime":"2026-02-24T09:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.648469 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.648519 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.648537 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.648562 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.648580 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:29Z","lastTransitionTime":"2026-02-24T09:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.726201 4755 scope.go:117] "RemoveContainer" containerID="efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6" Feb 24 09:56:29 crc kubenswrapper[4755]: E0224 09:56:29.726383 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.751146 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.751213 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.751234 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.751260 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.751277 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:29Z","lastTransitionTime":"2026-02-24T09:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.854590 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.854658 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.854678 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.854705 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.854723 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:29Z","lastTransitionTime":"2026-02-24T09:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.957705 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.957784 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.957809 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.957838 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:29 crc kubenswrapper[4755]: I0224 09:56:29.957855 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:29Z","lastTransitionTime":"2026-02-24T09:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.061314 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.061372 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.061391 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.061415 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.061433 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:30Z","lastTransitionTime":"2026-02-24T09:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.164648 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.164696 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.164713 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.164737 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.164753 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:30Z","lastTransitionTime":"2026-02-24T09:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.269136 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.269226 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.269250 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.269279 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.269312 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:30Z","lastTransitionTime":"2026-02-24T09:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.377562 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.377623 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.377641 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.377665 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.377682 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:30Z","lastTransitionTime":"2026-02-24T09:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.481646 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.481703 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.481721 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.481744 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.481760 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:30Z","lastTransitionTime":"2026-02-24T09:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.585283 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.585356 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.585376 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.585402 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.585424 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:30Z","lastTransitionTime":"2026-02-24T09:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.688821 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.688899 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.688923 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.688953 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.688974 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:30Z","lastTransitionTime":"2026-02-24T09:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.792465 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.792534 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.792553 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.792581 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.792602 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:30Z","lastTransitionTime":"2026-02-24T09:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.895554 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.895619 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.895638 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.895662 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.895679 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:30Z","lastTransitionTime":"2026-02-24T09:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.999488 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.999574 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.999604 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.999634 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:30 crc kubenswrapper[4755]: I0224 09:56:30.999655 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:30Z","lastTransitionTime":"2026-02-24T09:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.103193 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.103300 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.103327 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.103359 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.103382 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:31Z","lastTransitionTime":"2026-02-24T09:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.206451 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.206543 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.206568 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.206596 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.206614 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:31Z","lastTransitionTime":"2026-02-24T09:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.311323 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.311409 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.311428 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.311454 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.311471 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:31Z","lastTransitionTime":"2026-02-24T09:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.315643 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.315736 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.315757 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:31 crc kubenswrapper[4755]: E0224 09:56:31.315830 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.315887 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:31 crc kubenswrapper[4755]: E0224 09:56:31.316031 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:56:31 crc kubenswrapper[4755]: E0224 09:56:31.316301 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:56:31 crc kubenswrapper[4755]: E0224 09:56:31.316386 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.414842 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.414894 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.414917 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.414945 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.414966 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:31Z","lastTransitionTime":"2026-02-24T09:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.517882 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.518229 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.518364 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.518490 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.518610 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:31Z","lastTransitionTime":"2026-02-24T09:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.621721 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.621784 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.621802 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.621828 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.621847 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:31Z","lastTransitionTime":"2026-02-24T09:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.725590 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.725652 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.725669 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.725695 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.725713 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:31Z","lastTransitionTime":"2026-02-24T09:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.829207 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.829281 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.829297 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.829321 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.829339 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:31Z","lastTransitionTime":"2026-02-24T09:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.932744 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.932786 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.932796 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.932812 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:31 crc kubenswrapper[4755]: I0224 09:56:31.932822 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:31Z","lastTransitionTime":"2026-02-24T09:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.035450 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.035516 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.035533 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.035557 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.035573 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:32Z","lastTransitionTime":"2026-02-24T09:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.138867 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.138916 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.138929 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.138947 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.138960 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:32Z","lastTransitionTime":"2026-02-24T09:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.241766 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.241950 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.241981 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.242014 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.242041 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:32Z","lastTransitionTime":"2026-02-24T09:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.344999 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.345122 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.345152 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.345186 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.345208 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:32Z","lastTransitionTime":"2026-02-24T09:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.448299 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.448345 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.448357 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.448374 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.448388 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:32Z","lastTransitionTime":"2026-02-24T09:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.551862 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.551911 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.551928 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.551949 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.551966 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:32Z","lastTransitionTime":"2026-02-24T09:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.654513 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.654590 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.654625 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.654658 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.654696 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:32Z","lastTransitionTime":"2026-02-24T09:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.757459 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.757520 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.757537 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.757560 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.757577 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:32Z","lastTransitionTime":"2026-02-24T09:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.861490 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.861561 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.861575 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.861593 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.861605 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:32Z","lastTransitionTime":"2026-02-24T09:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.964880 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.964938 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.964961 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.964992 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:32 crc kubenswrapper[4755]: I0224 09:56:32.965013 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:32Z","lastTransitionTime":"2026-02-24T09:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.196151 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.196191 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.196209 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.196232 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.196246 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:33Z","lastTransitionTime":"2026-02-24T09:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.299352 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.299410 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.299427 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.299450 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.299468 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:33Z","lastTransitionTime":"2026-02-24T09:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.315769 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.315809 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.315884 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.315783 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:33 crc kubenswrapper[4755]: E0224 09:56:33.315968 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:56:33 crc kubenswrapper[4755]: E0224 09:56:33.316097 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:56:33 crc kubenswrapper[4755]: E0224 09:56:33.316261 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:56:33 crc kubenswrapper[4755]: E0224 09:56:33.316618 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:56:33 crc kubenswrapper[4755]: E0224 09:56:33.318845 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:56:33 crc kubenswrapper[4755]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 09:56:33 crc kubenswrapper[4755]: if [[ -f "/env/_master" ]]; then Feb 24 09:56:33 crc kubenswrapper[4755]: set -o allexport Feb 24 09:56:33 crc kubenswrapper[4755]: source "/env/_master" Feb 24 09:56:33 crc kubenswrapper[4755]: set +o allexport Feb 24 09:56:33 crc kubenswrapper[4755]: fi Feb 24 09:56:33 crc kubenswrapper[4755]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 24 09:56:33 crc kubenswrapper[4755]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 24 09:56:33 crc kubenswrapper[4755]: ho_enable="--enable-hybrid-overlay" Feb 24 09:56:33 crc kubenswrapper[4755]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 24 09:56:33 crc kubenswrapper[4755]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 24 09:56:33 crc kubenswrapper[4755]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 24 09:56:33 crc kubenswrapper[4755]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 24 09:56:33 crc kubenswrapper[4755]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 24 09:56:33 crc kubenswrapper[4755]: --webhook-host=127.0.0.1 \ Feb 24 09:56:33 crc kubenswrapper[4755]: --webhook-port=9743 \ Feb 24 09:56:33 crc kubenswrapper[4755]: ${ho_enable} \ Feb 24 09:56:33 crc kubenswrapper[4755]: --enable-interconnect \ Feb 24 09:56:33 crc 
kubenswrapper[4755]: --disable-approver \ Feb 24 09:56:33 crc kubenswrapper[4755]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 24 09:56:33 crc kubenswrapper[4755]: --wait-for-kubernetes-api=200s \ Feb 24 09:56:33 crc kubenswrapper[4755]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 24 09:56:33 crc kubenswrapper[4755]: --loglevel="${LOGLEVEL}" Feb 24 09:56:33 crc kubenswrapper[4755]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions
:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:56:33 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:33 crc kubenswrapper[4755]: E0224 09:56:33.322299 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:56:33 crc kubenswrapper[4755]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 09:56:33 crc kubenswrapper[4755]: if [[ -f "/env/_master" ]]; then Feb 24 09:56:33 crc kubenswrapper[4755]: set -o allexport Feb 24 09:56:33 crc kubenswrapper[4755]: source "/env/_master" Feb 24 09:56:33 crc kubenswrapper[4755]: set +o allexport Feb 24 09:56:33 crc kubenswrapper[4755]: fi Feb 24 09:56:33 crc kubenswrapper[4755]: Feb 24 09:56:33 crc kubenswrapper[4755]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 24 09:56:33 crc kubenswrapper[4755]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 24 09:56:33 crc kubenswrapper[4755]: --disable-webhook \ Feb 24 09:56:33 crc kubenswrapper[4755]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 24 09:56:33 crc kubenswrapper[4755]: --loglevel="${LOGLEVEL}" Feb 24 09:56:33 crc kubenswrapper[4755]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:56:33 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:33 crc kubenswrapper[4755]: E0224 09:56:33.323521 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.401596 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.401651 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.401667 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.401690 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.401709 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:33Z","lastTransitionTime":"2026-02-24T09:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.505722 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.505757 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.505767 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.505782 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.505791 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:33Z","lastTransitionTime":"2026-02-24T09:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.608879 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.608963 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.608987 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.609018 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.609041 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:33Z","lastTransitionTime":"2026-02-24T09:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.712451 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.712575 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.712597 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.712639 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.712661 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:33Z","lastTransitionTime":"2026-02-24T09:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.816234 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.816291 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.816305 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.816327 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.816342 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:33Z","lastTransitionTime":"2026-02-24T09:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.920480 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.920679 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.920750 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.920785 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:33 crc kubenswrapper[4755]: I0224 09:56:33.920855 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:33Z","lastTransitionTime":"2026-02-24T09:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.023641 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.023727 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.023737 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.023757 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.023769 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:34Z","lastTransitionTime":"2026-02-24T09:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.127212 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.127277 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.127297 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.127324 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.127341 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:34Z","lastTransitionTime":"2026-02-24T09:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.231153 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.231225 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.231249 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.231282 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.231308 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:34Z","lastTransitionTime":"2026-02-24T09:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:34 crc kubenswrapper[4755]: E0224 09:56:34.318956 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:56:34 crc kubenswrapper[4755]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Feb 24 09:56:34 crc kubenswrapper[4755]: set -uo pipefail Feb 24 09:56:34 crc kubenswrapper[4755]: Feb 24 09:56:34 crc kubenswrapper[4755]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 24 09:56:34 crc kubenswrapper[4755]: Feb 24 09:56:34 crc kubenswrapper[4755]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 24 09:56:34 crc kubenswrapper[4755]: HOSTS_FILE="/etc/hosts" Feb 24 09:56:34 crc kubenswrapper[4755]: TEMP_FILE="/etc/hosts.tmp" Feb 24 09:56:34 crc kubenswrapper[4755]: Feb 24 09:56:34 crc kubenswrapper[4755]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 24 09:56:34 crc kubenswrapper[4755]: Feb 24 09:56:34 crc kubenswrapper[4755]: # Make a temporary file with the old hosts file's attributes. Feb 24 09:56:34 crc kubenswrapper[4755]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 24 09:56:34 crc kubenswrapper[4755]: echo "Failed to preserve hosts file. Exiting." Feb 24 09:56:34 crc kubenswrapper[4755]: exit 1 Feb 24 09:56:34 crc kubenswrapper[4755]: fi Feb 24 09:56:34 crc kubenswrapper[4755]: Feb 24 09:56:34 crc kubenswrapper[4755]: while true; do Feb 24 09:56:34 crc kubenswrapper[4755]: declare -A svc_ips Feb 24 09:56:34 crc kubenswrapper[4755]: for svc in "${services[@]}"; do Feb 24 09:56:34 crc kubenswrapper[4755]: # Fetch service IP from cluster dns if present. We make several tries Feb 24 09:56:34 crc kubenswrapper[4755]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. 
The two last ones Feb 24 09:56:34 crc kubenswrapper[4755]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 24 09:56:34 crc kubenswrapper[4755]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 24 09:56:34 crc kubenswrapper[4755]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 24 09:56:34 crc kubenswrapper[4755]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 24 09:56:34 crc kubenswrapper[4755]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 24 09:56:34 crc kubenswrapper[4755]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 24 09:56:34 crc kubenswrapper[4755]: for i in ${!cmds[*]} Feb 24 09:56:34 crc kubenswrapper[4755]: do Feb 24 09:56:34 crc kubenswrapper[4755]: ips=($(eval "${cmds[i]}")) Feb 24 09:56:34 crc kubenswrapper[4755]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 24 09:56:34 crc kubenswrapper[4755]: svc_ips["${svc}"]="${ips[@]}" Feb 24 09:56:34 crc kubenswrapper[4755]: break Feb 24 09:56:34 crc kubenswrapper[4755]: fi Feb 24 09:56:34 crc kubenswrapper[4755]: done Feb 24 09:56:34 crc kubenswrapper[4755]: done Feb 24 09:56:34 crc kubenswrapper[4755]: Feb 24 09:56:34 crc kubenswrapper[4755]: # Update /etc/hosts only if we get valid service IPs Feb 24 09:56:34 crc kubenswrapper[4755]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 24 09:56:34 crc kubenswrapper[4755]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 24 09:56:34 crc kubenswrapper[4755]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 24 09:56:34 crc kubenswrapper[4755]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 24 09:56:34 crc kubenswrapper[4755]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 24 09:56:34 crc kubenswrapper[4755]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 24 09:56:34 crc kubenswrapper[4755]: sleep 60 & wait Feb 24 09:56:34 crc kubenswrapper[4755]: continue Feb 24 09:56:34 crc kubenswrapper[4755]: fi Feb 24 09:56:34 crc kubenswrapper[4755]: Feb 24 09:56:34 crc kubenswrapper[4755]: # Append resolver entries for services Feb 24 09:56:34 crc kubenswrapper[4755]: rc=0 Feb 24 09:56:34 crc kubenswrapper[4755]: for svc in "${!svc_ips[@]}"; do Feb 24 09:56:34 crc kubenswrapper[4755]: for ip in ${svc_ips[${svc}]}; do Feb 24 09:56:34 crc kubenswrapper[4755]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 24 09:56:34 crc kubenswrapper[4755]: done Feb 24 09:56:34 crc kubenswrapper[4755]: done Feb 24 09:56:34 crc kubenswrapper[4755]: if [[ $rc -ne 0 ]]; then Feb 24 09:56:34 crc kubenswrapper[4755]: sleep 60 & wait Feb 24 09:56:34 crc kubenswrapper[4755]: continue Feb 24 09:56:34 crc kubenswrapper[4755]: fi Feb 24 09:56:34 crc kubenswrapper[4755]: Feb 24 09:56:34 crc kubenswrapper[4755]: Feb 24 09:56:34 crc kubenswrapper[4755]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 24 09:56:34 crc kubenswrapper[4755]: # Replace /etc/hosts with our modified version if needed Feb 24 09:56:34 crc kubenswrapper[4755]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 24 09:56:34 crc kubenswrapper[4755]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 24 09:56:34 crc kubenswrapper[4755]: fi Feb 24 09:56:34 crc kubenswrapper[4755]: sleep 60 & wait Feb 24 09:56:34 crc kubenswrapper[4755]: unset svc_ips Feb 24 09:56:34 crc kubenswrapper[4755]: done Feb 24 09:56:34 crc kubenswrapper[4755]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mzxz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-2cmfc_openshift-dns(04c132ba-c396-4f64-a02b-fcdae681ed74): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:56:34 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:34 crc kubenswrapper[4755]: E0224 09:56:34.320102 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:56:34 crc kubenswrapper[4755]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec 
--],Args:[MULTUS_DAEMON_OPT="" Feb 24 09:56:34 crc kubenswrapper[4755]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 24 09:56:34 crc kubenswrapper[4755]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w29tr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-dwm6v_openshift-multus(79ca0953-3a40-45a2-9305-02272f036006): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:56:34 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:34 crc kubenswrapper[4755]: E0224 09:56:34.320132 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-2cmfc" podUID="04c132ba-c396-4f64-a02b-fcdae681ed74" Feb 24 09:56:34 crc kubenswrapper[4755]: E0224 09:56:34.321599 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-dwm6v" podUID="79ca0953-3a40-45a2-9305-02272f036006" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.334623 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.334760 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.334802 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.334828 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.334843 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:34Z","lastTransitionTime":"2026-02-24T09:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.437756 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.437822 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.437840 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.437866 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.437885 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:34Z","lastTransitionTime":"2026-02-24T09:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.540730 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.540864 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.540882 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.540906 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.540924 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:34Z","lastTransitionTime":"2026-02-24T09:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.644832 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.644878 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.644894 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.644921 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.644939 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:34Z","lastTransitionTime":"2026-02-24T09:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.749725 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.749783 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.749800 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.749826 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.749844 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:34Z","lastTransitionTime":"2026-02-24T09:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.852432 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.852552 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.852570 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.852596 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.852615 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:34Z","lastTransitionTime":"2026-02-24T09:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.956328 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.956391 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.956408 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.956434 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:34 crc kubenswrapper[4755]: I0224 09:56:34.956452 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:34Z","lastTransitionTime":"2026-02-24T09:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.059813 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.059878 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.059900 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.059923 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.059941 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:35Z","lastTransitionTime":"2026-02-24T09:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.163389 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.163443 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.163461 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.163484 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.163502 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:35Z","lastTransitionTime":"2026-02-24T09:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.267265 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.267344 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.267370 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.267405 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.267429 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:35Z","lastTransitionTime":"2026-02-24T09:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.315640 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.316353 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.316458 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.316507 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:35 crc kubenswrapper[4755]: E0224 09:56:35.316644 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:56:35 crc kubenswrapper[4755]: E0224 09:56:35.316945 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:56:35 crc kubenswrapper[4755]: E0224 09:56:35.318037 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:56:35 crc kubenswrapper[4755]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 24 09:56:35 crc kubenswrapper[4755]: while [ true ]; Feb 24 09:56:35 crc kubenswrapper[4755]: do Feb 24 09:56:35 crc kubenswrapper[4755]: for f in $(ls /tmp/serviceca); do Feb 24 09:56:35 crc kubenswrapper[4755]: echo $f Feb 24 09:56:35 crc kubenswrapper[4755]: ca_file_path="/tmp/serviceca/${f}" Feb 24 09:56:35 crc kubenswrapper[4755]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 24 09:56:35 crc kubenswrapper[4755]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 24 09:56:35 crc kubenswrapper[4755]: if [ -e "${reg_dir_path}" ]; then Feb 24 09:56:35 
crc kubenswrapper[4755]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 24 09:56:35 crc kubenswrapper[4755]: else Feb 24 09:56:35 crc kubenswrapper[4755]: mkdir $reg_dir_path Feb 24 09:56:35 crc kubenswrapper[4755]: cp $ca_file_path $reg_dir_path/ca.crt Feb 24 09:56:35 crc kubenswrapper[4755]: fi Feb 24 09:56:35 crc kubenswrapper[4755]: done Feb 24 09:56:35 crc kubenswrapper[4755]: for d in $(ls /etc/docker/certs.d); do Feb 24 09:56:35 crc kubenswrapper[4755]: echo $d Feb 24 09:56:35 crc kubenswrapper[4755]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 24 09:56:35 crc kubenswrapper[4755]: reg_conf_path="/tmp/serviceca/${dp}" Feb 24 09:56:35 crc kubenswrapper[4755]: if [ ! -e "${reg_conf_path}" ]; then Feb 24 09:56:35 crc kubenswrapper[4755]: rm -rf /etc/docker/certs.d/$d Feb 24 09:56:35 crc kubenswrapper[4755]: fi Feb 24 09:56:35 crc kubenswrapper[4755]: done Feb 24 09:56:35 crc kubenswrapper[4755]: sleep 60 & wait ${!} Feb 24 09:56:35 crc kubenswrapper[4755]: done Feb 24 09:56:35 crc kubenswrapper[4755]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4bvck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-bxllg_openshift-image-registry(6ca669cb-3429-4187-bee6-232dbd316c67): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:56:35 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:35 crc kubenswrapper[4755]: E0224 09:56:35.319016 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:56:35 crc kubenswrapper[4755]: E0224 09:56:35.319207 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:56:35 crc kubenswrapper[4755]: E0224 09:56:35.319393 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:56:35 crc kubenswrapper[4755]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 24 09:56:35 crc kubenswrapper[4755]: apiVersion: v1 Feb 24 09:56:35 crc kubenswrapper[4755]: clusters: Feb 24 09:56:35 crc kubenswrapper[4755]: - cluster: Feb 24 09:56:35 crc kubenswrapper[4755]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 24 09:56:35 crc kubenswrapper[4755]: server: https://api-int.crc.testing:6443 Feb 24 09:56:35 crc kubenswrapper[4755]: name: default-cluster Feb 24 09:56:35 crc kubenswrapper[4755]: contexts: Feb 24 09:56:35 crc kubenswrapper[4755]: - context: Feb 24 09:56:35 crc kubenswrapper[4755]: cluster: default-cluster Feb 24 09:56:35 crc kubenswrapper[4755]: namespace: default Feb 24 09:56:35 crc kubenswrapper[4755]: user: default-auth Feb 24 09:56:35 crc kubenswrapper[4755]: name: default-context Feb 24 09:56:35 crc kubenswrapper[4755]: current-context: default-context Feb 24 09:56:35 crc kubenswrapper[4755]: kind: Config Feb 24 09:56:35 crc kubenswrapper[4755]: preferences: {} Feb 24 09:56:35 crc kubenswrapper[4755]: users: Feb 
24 09:56:35 crc kubenswrapper[4755]: - name: default-auth Feb 24 09:56:35 crc kubenswrapper[4755]: user: Feb 24 09:56:35 crc kubenswrapper[4755]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 24 09:56:35 crc kubenswrapper[4755]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 24 09:56:35 crc kubenswrapper[4755]: EOF Feb 24 09:56:35 crc kubenswrapper[4755]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kxn7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:56:35 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:35 crc kubenswrapper[4755]: E0224 09:56:35.319419 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-bxllg" podUID="6ca669cb-3429-4187-bee6-232dbd316c67" Feb 24 09:56:35 crc kubenswrapper[4755]: E0224 09:56:35.320968 4755 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" podUID="787109ef-edb9-4334-afc7-6197f57f444f" Feb 24 09:56:35 crc kubenswrapper[4755]: E0224 09:56:35.321310 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 09:56:35 crc kubenswrapper[4755]: E0224 09:56:35.323217 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.370823 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.371186 
4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.371372 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.371521 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.371686 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:35Z","lastTransitionTime":"2026-02-24T09:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.475291 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.475364 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.475388 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.475419 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.475443 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:35Z","lastTransitionTime":"2026-02-24T09:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.578347 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.578392 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.578410 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.578435 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.578452 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:35Z","lastTransitionTime":"2026-02-24T09:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.681396 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.681456 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.681473 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.681499 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.681518 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:35Z","lastTransitionTime":"2026-02-24T09:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.784274 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.784345 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.784368 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.784398 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.784422 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:35Z","lastTransitionTime":"2026-02-24T09:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.887733 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.887797 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.887816 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.887842 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.887860 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:35Z","lastTransitionTime":"2026-02-24T09:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.990893 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.990997 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.991027 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.991116 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:35 crc kubenswrapper[4755]: I0224 09:56:35.991151 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:35Z","lastTransitionTime":"2026-02-24T09:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.094651 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.094723 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.094744 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.094770 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.094787 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:36Z","lastTransitionTime":"2026-02-24T09:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.199143 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.199213 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.199299 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.199327 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.199352 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:36Z","lastTransitionTime":"2026-02-24T09:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.302671 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.302729 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.302743 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.302768 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.302783 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:36Z","lastTransitionTime":"2026-02-24T09:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:36 crc kubenswrapper[4755]: E0224 09:56:36.319314 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 09:56:36 crc kubenswrapper[4755]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 24 09:56:36 crc kubenswrapper[4755]: set -o allexport Feb 24 09:56:36 crc kubenswrapper[4755]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 24 09:56:36 crc kubenswrapper[4755]: source /etc/kubernetes/apiserver-url.env Feb 24 09:56:36 crc kubenswrapper[4755]: else Feb 24 09:56:36 crc kubenswrapper[4755]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 24 09:56:36 crc kubenswrapper[4755]: exit 1 Feb 24 09:56:36 crc kubenswrapper[4755]: fi Feb 24 09:56:36 crc kubenswrapper[4755]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 24 09:56:36 crc kubenswrapper[4755]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 09:56:36 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 09:56:36 crc kubenswrapper[4755]: E0224 09:56:36.322239 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.334707 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.349200 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.364730 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.381947 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.397198 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.406686 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.406782 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.406810 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.406835 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.406852 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:36Z","lastTransitionTime":"2026-02-24T09:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.409599 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.426143 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.439798 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.453418 4755 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.479131 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.497601 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.508791 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.508836 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.508849 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.508867 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.508879 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:36Z","lastTransitionTime":"2026-02-24T09:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.516418 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.530406 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.546761 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.565193 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.594057 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.594124 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.594139 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.594161 4755 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.594177 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:36Z","lastTransitionTime":"2026-02-24T09:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:36 crc kubenswrapper[4755]: E0224 09:56:36.610339 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.614946 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.614984 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.614996 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.615014 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.615025 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:36Z","lastTransitionTime":"2026-02-24T09:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:36 crc kubenswrapper[4755]: E0224 09:56:36.629415 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.638657 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.638725 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.638737 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.638757 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.638770 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:36Z","lastTransitionTime":"2026-02-24T09:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:36 crc kubenswrapper[4755]: E0224 09:56:36.654836 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.659782 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.659811 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.659820 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.659833 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.659844 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:36Z","lastTransitionTime":"2026-02-24T09:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:36 crc kubenswrapper[4755]: E0224 09:56:36.672376 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.677135 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.677181 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.677198 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.677221 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.677238 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:36Z","lastTransitionTime":"2026-02-24T09:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:36 crc kubenswrapper[4755]: E0224 09:56:36.693950 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:36 crc kubenswrapper[4755]: E0224 09:56:36.694272 4755 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.696264 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.696307 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.696325 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.696347 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.696364 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:36Z","lastTransitionTime":"2026-02-24T09:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.799768 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.799856 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.799878 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.799909 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.799936 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:36Z","lastTransitionTime":"2026-02-24T09:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.908814 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.908872 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.908901 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.908929 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:36 crc kubenswrapper[4755]: I0224 09:56:36.908947 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:36Z","lastTransitionTime":"2026-02-24T09:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.011733 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.011798 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.011816 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.011843 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.011863 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:37Z","lastTransitionTime":"2026-02-24T09:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.016588 4755 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.114810 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.114993 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:37 crc kubenswrapper[4755]: E0224 09:56:37.115114 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:56:53.115033226 +0000 UTC m=+117.571555809 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.115180 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.115230 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.115273 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.115279 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.115297 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.115325 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 24 09:56:37 crc kubenswrapper[4755]: E0224 09:56:37.115339 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:56:37 crc kubenswrapper[4755]: E0224 09:56:37.115380 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:56:37 crc kubenswrapper[4755]: E0224 09:56:37.115376 4755 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:56:37 crc kubenswrapper[4755]: E0224 09:56:37.115427 4755 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.115345 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:37Z","lastTransitionTime":"2026-02-24T09:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:37 crc kubenswrapper[4755]: E0224 09:56:37.115406 4755 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:37 crc kubenswrapper[4755]: E0224 09:56:37.115506 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:56:53.11547096 +0000 UTC m=+117.571993603 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:56:37 crc kubenswrapper[4755]: E0224 09:56:37.115714 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:56:53.115665626 +0000 UTC m=+117.572188209 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:56:37 crc kubenswrapper[4755]: E0224 09:56:37.115745 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 09:56:53.115731398 +0000 UTC m=+117.572253981 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.216957 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs\") pod \"network-metrics-daemon-98t22\" (UID: \"82775556-3991-45ab-ac50-7ef81cafeaee\") " pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.217046 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:37 crc kubenswrapper[4755]: E0224 
09:56:37.217222 4755 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:56:37 crc kubenswrapper[4755]: E0224 09:56:37.217275 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:56:37 crc kubenswrapper[4755]: E0224 09:56:37.217310 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:56:37 crc kubenswrapper[4755]: E0224 09:56:37.217324 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs podName:82775556-3991-45ab-ac50-7ef81cafeaee nodeName:}" failed. No retries permitted until 2026-02-24 09:56:53.217295522 +0000 UTC m=+117.673818105 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs") pod "network-metrics-daemon-98t22" (UID: "82775556-3991-45ab-ac50-7ef81cafeaee") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:56:37 crc kubenswrapper[4755]: E0224 09:56:37.217331 4755 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:37 crc kubenswrapper[4755]: E0224 09:56:37.217410 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-24 09:56:53.217385135 +0000 UTC m=+117.673907718 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.218653 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.218987 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.219012 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.219037 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.219136 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:37Z","lastTransitionTime":"2026-02-24T09:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.309767 4755 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.315902 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.315952 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.315926 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:37 crc kubenswrapper[4755]: E0224 09:56:37.316128 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.316165 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:37 crc kubenswrapper[4755]: E0224 09:56:37.317541 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:56:37 crc kubenswrapper[4755]: E0224 09:56:37.317700 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:56:37 crc kubenswrapper[4755]: E0224 09:56:37.317919 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.321996 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.322050 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.322109 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.322139 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.322159 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:37Z","lastTransitionTime":"2026-02-24T09:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.425850 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.426254 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.426268 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.426303 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.426316 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:37Z","lastTransitionTime":"2026-02-24T09:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.529322 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.529361 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.529372 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.529389 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.529400 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:37Z","lastTransitionTime":"2026-02-24T09:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.632000 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.632054 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.632075 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.632113 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.632127 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:37Z","lastTransitionTime":"2026-02-24T09:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.734949 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.734993 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.735006 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.735030 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.735043 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:37Z","lastTransitionTime":"2026-02-24T09:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.753022 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" event={"ID":"dec1056d-97aa-4dfc-a63d-d729dfdb88f5","Type":"ContainerStarted","Data":"a047f115755f45594fd14de8c9e60e32be11b913771d184d9ebae037b994e189"} Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.753120 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" event={"ID":"dec1056d-97aa-4dfc-a63d-d729dfdb88f5","Type":"ContainerStarted","Data":"0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081"} Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.754890 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" event={"ID":"f842f5c8-ff09-48b2-9805-ad9de28e2ea7","Type":"ContainerStarted","Data":"0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c"} Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.758547 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerStarted","Data":"5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0"} Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.758603 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerStarted","Data":"2161eb648464d24c62a458b7a658d549183c8fed8a6904d37a1bffc7930d992e"} Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.768019 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.781279 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.793112 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.827047 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.838161 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.838207 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.838224 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.838245 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.838262 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:37Z","lastTransitionTime":"2026-02-24T09:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.845376 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.861595 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.880789 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.897971 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.915490 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.930066 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.940981 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.941036 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.941054 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.941111 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.941130 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:37Z","lastTransitionTime":"2026-02-24T09:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.942935 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metri
cs-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.956291 4755 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.971277 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.983570 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:37 crc kubenswrapper[4755]: I0224 09:56:37.994450 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.007393 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.021852 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.035192 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.044389 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.044437 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 
09:56:38.044469 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.044493 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.044506 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:38Z","lastTransitionTime":"2026-02-24T09:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.048692 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.062720 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.075208 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.084557 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.112641 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.128310 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.142167 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.147343 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.147412 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.147435 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.147468 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.147494 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:38Z","lastTransitionTime":"2026-02-24T09:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.155470 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.168256 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.184841 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.199227 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.211705 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.249947 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.250003 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.250016 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.250040 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.250052 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:38Z","lastTransitionTime":"2026-02-24T09:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.353324 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.353371 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.353382 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.353403 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.353415 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:38Z","lastTransitionTime":"2026-02-24T09:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.457888 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.458494 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.458506 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.458531 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.458544 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:38Z","lastTransitionTime":"2026-02-24T09:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.562230 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.562299 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.562309 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.562331 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.562343 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:38Z","lastTransitionTime":"2026-02-24T09:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.664605 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.664686 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.664704 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.664732 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.664756 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:38Z","lastTransitionTime":"2026-02-24T09:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.764541 4755 generic.go:334] "Generic (PLEG): container finished" podID="f842f5c8-ff09-48b2-9805-ad9de28e2ea7" containerID="0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c" exitCode=0 Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.764606 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" event={"ID":"f842f5c8-ff09-48b2-9805-ad9de28e2ea7","Type":"ContainerDied","Data":"0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c"} Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.766904 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.766939 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.766949 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.766965 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.766979 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:38Z","lastTransitionTime":"2026-02-24T09:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.776719 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.787488 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.806445 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.815740 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.827456 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.836332 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.844828 4755 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.860465 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.871360 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.871413 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.871432 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.871458 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.871475 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:38Z","lastTransitionTime":"2026-02-24T09:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.872907 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.887645 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.906157 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.916625 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.927648 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.937392 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.944136 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.974273 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.974355 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.974381 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:38 crc 
kubenswrapper[4755]: I0224 09:56:38.974416 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:38 crc kubenswrapper[4755]: I0224 09:56:38.974441 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:38Z","lastTransitionTime":"2026-02-24T09:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.077355 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.077416 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.077435 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.077462 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.077481 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:39Z","lastTransitionTime":"2026-02-24T09:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.180864 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.180912 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.180927 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.180948 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.180964 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:39Z","lastTransitionTime":"2026-02-24T09:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.286419 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.287091 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.287117 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.287148 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.287168 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:39Z","lastTransitionTime":"2026-02-24T09:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.316349 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.316406 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.316444 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.316561 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:39 crc kubenswrapper[4755]: E0224 09:56:39.316587 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:56:39 crc kubenswrapper[4755]: E0224 09:56:39.316787 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:56:39 crc kubenswrapper[4755]: E0224 09:56:39.316915 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:56:39 crc kubenswrapper[4755]: E0224 09:56:39.316990 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.337921 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.391191 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.391248 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.391265 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.391289 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.391313 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:39Z","lastTransitionTime":"2026-02-24T09:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.494915 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.494976 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.495020 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.495046 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.495062 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:39Z","lastTransitionTime":"2026-02-24T09:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.598717 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.598770 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.598782 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.598812 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.598826 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:39Z","lastTransitionTime":"2026-02-24T09:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.701589 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.701648 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.701666 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.701690 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.701708 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:39Z","lastTransitionTime":"2026-02-24T09:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.771149 4755 generic.go:334] "Generic (PLEG): container finished" podID="f842f5c8-ff09-48b2-9805-ad9de28e2ea7" containerID="a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6" exitCode=0 Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.771286 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" event={"ID":"f842f5c8-ff09-48b2-9805-ad9de28e2ea7","Type":"ContainerDied","Data":"a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6"} Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.786464 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.799949 4755 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.805614 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.806053 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.806118 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.806148 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.806169 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:39Z","lastTransitionTime":"2026-02-24T09:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.827409 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.845366 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.861983 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.879217 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.896196 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.909422 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.909476 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.909495 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.909523 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.909543 4755 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:39Z","lastTransitionTime":"2026-02-24T09:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.911491 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.926122 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.937616 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.955292 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.974294 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.986329 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:39 crc kubenswrapper[4755]: I0224 09:56:39.996708 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.007303 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.017299 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.017360 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.017378 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.017403 4755 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.017420 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:40Z","lastTransitionTime":"2026-02-24T09:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.020812 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.125303 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.125363 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.125383 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 
09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.125408 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.125426 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:40Z","lastTransitionTime":"2026-02-24T09:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.228717 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.228769 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.228780 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.228798 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.228807 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:40Z","lastTransitionTime":"2026-02-24T09:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.331885 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.331944 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.331962 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.331985 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.332004 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:40Z","lastTransitionTime":"2026-02-24T09:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.435820 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.435899 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.435937 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.435969 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.435993 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:40Z","lastTransitionTime":"2026-02-24T09:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.538882 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.538918 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.538929 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.538948 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.538960 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:40Z","lastTransitionTime":"2026-02-24T09:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.641393 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.641909 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.641939 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.641971 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.641994 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:40Z","lastTransitionTime":"2026-02-24T09:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.745564 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.745633 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.745653 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.745679 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.745696 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:40Z","lastTransitionTime":"2026-02-24T09:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.778100 4755 generic.go:334] "Generic (PLEG): container finished" podID="f842f5c8-ff09-48b2-9805-ad9de28e2ea7" containerID="c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee" exitCode=0 Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.778165 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" event={"ID":"f842f5c8-ff09-48b2-9805-ad9de28e2ea7","Type":"ContainerDied","Data":"c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee"} Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.796985 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.818638 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.847434 4755 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.876589 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.876643 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.876659 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.876683 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.876702 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:40Z","lastTransitionTime":"2026-02-24T09:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.883985 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.906111 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.915770 4755 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93
a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.923688 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.936158 4755 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.948047 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.966968 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789b
ec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.980091 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.980353 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.980378 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.980389 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 
09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.980406 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.980419 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:40Z","lastTransitionTime":"2026-02-24T09:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:40 crc kubenswrapper[4755]: I0224 09:56:40.991001 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\
\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.001307 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.010346 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.020147 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.028868 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.082793 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.082842 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.082857 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.082879 4755 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.082896 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:41Z","lastTransitionTime":"2026-02-24T09:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.185584 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.185642 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.185660 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.185685 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.185703 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:41Z","lastTransitionTime":"2026-02-24T09:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.289271 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.289338 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.289363 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.289394 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.289412 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:41Z","lastTransitionTime":"2026-02-24T09:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.315446 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.315500 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.315551 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.315862 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:41 crc kubenswrapper[4755]: E0224 09:56:41.316288 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:56:41 crc kubenswrapper[4755]: E0224 09:56:41.316428 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.316475 4755 scope.go:117] "RemoveContainer" containerID="efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6" Feb 24 09:56:41 crc kubenswrapper[4755]: E0224 09:56:41.316749 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:56:41 crc kubenswrapper[4755]: E0224 09:56:41.316637 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:56:41 crc kubenswrapper[4755]: E0224 09:56:41.317161 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.391811 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.391868 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.391891 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.391924 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.391948 4755 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:41Z","lastTransitionTime":"2026-02-24T09:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.494795 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.494854 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.494873 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.494897 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.494914 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:41Z","lastTransitionTime":"2026-02-24T09:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.598900 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.598959 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.599003 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.599026 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.599048 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:41Z","lastTransitionTime":"2026-02-24T09:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.701916 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.701973 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.701990 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.702012 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.702029 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:41Z","lastTransitionTime":"2026-02-24T09:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.792825 4755 generic.go:334] "Generic (PLEG): container finished" podID="f842f5c8-ff09-48b2-9805-ad9de28e2ea7" containerID="e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c" exitCode=0 Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.792939 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" event={"ID":"f842f5c8-ff09-48b2-9805-ad9de28e2ea7","Type":"ContainerDied","Data":"e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c"} Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.804738 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.804810 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.804831 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.804863 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.804890 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:41Z","lastTransitionTime":"2026-02-24T09:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.809704 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.824110 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.852562 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.866597 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.883831 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.895706 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.908300 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.908648 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.908817 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.908836 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.908864 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.908882 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:41Z","lastTransitionTime":"2026-02-24T09:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.921057 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.933322 4755 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.963151 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:41 crc kubenswrapper[4755]: I0224 09:56:41.983266 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.002501 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 
09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.011996 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.012116 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.012128 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.012151 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.012167 4755 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:42Z","lastTransitionTime":"2026-02-24T09:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.020850 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.038738 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.057190 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.079505 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.114874 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.114932 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.114951 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.114974 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.115011 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:42Z","lastTransitionTime":"2026-02-24T09:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.219239 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.219311 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.219328 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.219356 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.219377 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:42Z","lastTransitionTime":"2026-02-24T09:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.323378 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.323451 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.323476 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.323507 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.323533 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:42Z","lastTransitionTime":"2026-02-24T09:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.427666 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.427735 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.427757 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.427786 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.427805 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:42Z","lastTransitionTime":"2026-02-24T09:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.530780 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.530868 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.530894 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.530929 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.530954 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:42Z","lastTransitionTime":"2026-02-24T09:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.633590 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.633628 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.633637 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.633652 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.633660 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:42Z","lastTransitionTime":"2026-02-24T09:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.736539 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.736577 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.736587 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.736605 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.736614 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:42Z","lastTransitionTime":"2026-02-24T09:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.802145 4755 generic.go:334] "Generic (PLEG): container finished" podID="f842f5c8-ff09-48b2-9805-ad9de28e2ea7" containerID="5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133" exitCode=0 Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.802227 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" event={"ID":"f842f5c8-ff09-48b2-9805-ad9de28e2ea7","Type":"ContainerDied","Data":"5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133"} Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.835436 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-d
ir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.839263 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.839315 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.839334 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.839364 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.839386 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:42Z","lastTransitionTime":"2026-02-24T09:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.853880 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.868277 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.883907 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.899403 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.912279 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.922965 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.937730 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.943343 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.943395 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.943415 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.943439 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.943456 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:42Z","lastTransitionTime":"2026-02-24T09:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.950749 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.960630 4755 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:42 crc kubenswrapper[4755]: I0224 09:56:42.989908 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.010268 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.028261 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.045499 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.049253 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.049300 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.049321 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.049348 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.049366 4755 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:43Z","lastTransitionTime":"2026-02-24T09:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.064296 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.083438 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae
796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.151787 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.151837 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.151848 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.151867 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.151879 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:43Z","lastTransitionTime":"2026-02-24T09:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.254948 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.254987 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.255001 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.255021 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.255037 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:43Z","lastTransitionTime":"2026-02-24T09:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.315411 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.315448 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:43 crc kubenswrapper[4755]: E0224 09:56:43.315563 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.315616 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.315646 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:43 crc kubenswrapper[4755]: E0224 09:56:43.315700 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:56:43 crc kubenswrapper[4755]: E0224 09:56:43.315789 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:56:43 crc kubenswrapper[4755]: E0224 09:56:43.315953 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.357601 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.357666 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.357683 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.357709 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.357727 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:43Z","lastTransitionTime":"2026-02-24T09:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.461612 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.461676 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.461703 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.461731 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.461749 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:43Z","lastTransitionTime":"2026-02-24T09:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.566663 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.566715 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.566736 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.566765 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.566785 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:43Z","lastTransitionTime":"2026-02-24T09:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.671319 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.671391 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.671415 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.671445 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.671469 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:43Z","lastTransitionTime":"2026-02-24T09:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.774212 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.774268 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.774280 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.774306 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.774319 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:43Z","lastTransitionTime":"2026-02-24T09:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.812426 4755 generic.go:334] "Generic (PLEG): container finished" podID="f842f5c8-ff09-48b2-9805-ad9de28e2ea7" containerID="5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24" exitCode=0 Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.812497 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" event={"ID":"f842f5c8-ff09-48b2-9805-ad9de28e2ea7","Type":"ContainerDied","Data":"5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24"} Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.833667 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.850954 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.871874 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.877500 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.877572 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.877590 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.877620 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.877644 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:43Z","lastTransitionTime":"2026-02-24T09:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.894296 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.913683 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632
bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.927685 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072c
c284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running
\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.959039 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\
":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850
ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.973934 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.980882 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.980952 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.980981 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.981015 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.981043 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:43Z","lastTransitionTime":"2026-02-24T09:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:43 crc kubenswrapper[4755]: I0224 09:56:43.989813 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.005485 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.020437 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.033318 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.047376 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.075226 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.083736 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.083796 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.083815 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.083844 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.083862 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:44Z","lastTransitionTime":"2026-02-24T09:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.092643 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.105519 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.186731 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.186772 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.186784 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.186802 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.186813 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:44Z","lastTransitionTime":"2026-02-24T09:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.289756 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.289818 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.289835 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.289860 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.289876 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:44Z","lastTransitionTime":"2026-02-24T09:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.393546 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.393615 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.393678 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.393705 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.393726 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:44Z","lastTransitionTime":"2026-02-24T09:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.497015 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.497142 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.497174 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.497205 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.497225 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:44Z","lastTransitionTime":"2026-02-24T09:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.600582 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.600636 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.600693 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.600718 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.600736 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:44Z","lastTransitionTime":"2026-02-24T09:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.704001 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.704060 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.704094 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.704111 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.704126 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:44Z","lastTransitionTime":"2026-02-24T09:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.806639 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.806700 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.806717 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.806743 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.806761 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:44Z","lastTransitionTime":"2026-02-24T09:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.826389 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" event={"ID":"f842f5c8-ff09-48b2-9805-ad9de28e2ea7","Type":"ContainerStarted","Data":"081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90"} Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.844574 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.857835 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.870123 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.880423 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.891210 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.903859 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.909733 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.909799 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.909820 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.909849 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.909868 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:44Z","lastTransitionTime":"2026-02-24T09:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.924487 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.942623 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.962044 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:44 crc kubenswrapper[4755]: I0224 09:56:44.985621 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.006843 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.013320 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.013387 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.013405 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.013433 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.013454 4755 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:45Z","lastTransitionTime":"2026-02-24T09:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.027644 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.046659 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a28
15bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496
fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.059815 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.074781 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.103769 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.116858 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.117241 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.117278 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.117305 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.117322 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:45Z","lastTransitionTime":"2026-02-24T09:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.221544 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.221615 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.221640 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.221674 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.221699 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:45Z","lastTransitionTime":"2026-02-24T09:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.316341 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.316382 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:45 crc kubenswrapper[4755]: E0224 09:56:45.316490 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.316851 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.316846 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:45 crc kubenswrapper[4755]: E0224 09:56:45.316953 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:56:45 crc kubenswrapper[4755]: E0224 09:56:45.317024 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:56:45 crc kubenswrapper[4755]: E0224 09:56:45.317316 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.325813 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.325983 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.326051 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.326253 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.326273 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:45Z","lastTransitionTime":"2026-02-24T09:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.430799 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.430849 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.430877 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.430907 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.430929 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:45Z","lastTransitionTime":"2026-02-24T09:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.534443 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.534492 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.534504 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.534559 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.534574 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:45Z","lastTransitionTime":"2026-02-24T09:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.637006 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.637057 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.637144 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.637166 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.637178 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:45Z","lastTransitionTime":"2026-02-24T09:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.740274 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.740328 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.740345 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.740367 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.740385 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:45Z","lastTransitionTime":"2026-02-24T09:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.837451 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81"} Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.837496 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc"} Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.842733 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.842790 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.842798 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.842811 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.842821 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:45Z","lastTransitionTime":"2026-02-24T09:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.858628 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.870655 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.880870 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.896840 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.911625 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.926765 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.944872 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:45Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.945778 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.945831 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.945849 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.945873 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.945892 4755 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:45Z","lastTransitionTime":"2026-02-24T09:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.959501 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:45Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.978691 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0831
3793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:45Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:45 crc kubenswrapper[4755]: I0224 09:56:45.999372 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca35
50ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:45Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.014415 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.032458 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e
32be11b913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.048607 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.048653 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.048671 4755 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.048696 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.048712 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:46Z","lastTransitionTime":"2026-02-24T09:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.050244 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.068185 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.083449 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.094866 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.151508 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.151547 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.151555 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.151571 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.151580 4755 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:46Z","lastTransitionTime":"2026-02-24T09:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.255058 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.255160 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.255185 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.255213 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.255230 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:46Z","lastTransitionTime":"2026-02-24T09:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.344239 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.359257 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.359335 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.359358 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.359386 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.359406 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:46Z","lastTransitionTime":"2026-02-24T09:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.360620 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.374802 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.392248 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.414979 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.430445 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.446578 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.461735 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.462191 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.462235 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.462246 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.462264 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.462275 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:46Z","lastTransitionTime":"2026-02-24T09:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.477880 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 
09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.497779 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.521109 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc
4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\
\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.540718 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.559708 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.565609 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.565641 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.565652 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.565668 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.565681 4755 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:46Z","lastTransitionTime":"2026-02-24T09:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.585852 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b00
7d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\
\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.599029 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.609183 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e
32be11b913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.667907 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.667945 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.667956 4755 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.667975 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.667987 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:46Z","lastTransitionTime":"2026-02-24T09:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.770486 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.770545 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.770562 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.770589 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.770606 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:46Z","lastTransitionTime":"2026-02-24T09:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.844848 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bxllg" event={"ID":"6ca669cb-3429-4187-bee6-232dbd316c67","Type":"ContainerStarted","Data":"8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117"} Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.862683 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.873942 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.874009 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.874039 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.874104 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.874129 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:46Z","lastTransitionTime":"2026-02-24T09:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.893279 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.906976 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.919518 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.934319 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.950902 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.966695 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.977922 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.978020 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.978039 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.978125 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.978146 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:46Z","lastTransitionTime":"2026-02-24T09:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:46 crc kubenswrapper[4755]: I0224 09:56:46.984629 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.005231 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:47Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.024157 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:47Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.036955 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:47Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:47 crc 
kubenswrapper[4755]: I0224 09:56:47.040802 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.040872 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.040897 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.040928 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.040955 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:47Z","lastTransitionTime":"2026-02-24T09:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.054452 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:47Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:47 crc kubenswrapper[4755]: E0224 09:56:47.059057 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:47Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.063603 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.063641 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.063658 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.063676 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.063690 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:47Z","lastTransitionTime":"2026-02-24T09:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.069180 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:47Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:47 crc kubenswrapper[4755]: E0224 09:56:47.078160 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:47Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.082263 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.082301 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.082312 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.082329 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.082341 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:47Z","lastTransitionTime":"2026-02-24T09:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.090975 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:47Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:47 crc kubenswrapper[4755]: E0224 09:56:47.098908 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb4
9c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d4
6c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\
\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c
5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:47Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.102529 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.102573 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.102585 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.102604 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.102616 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:47Z","lastTransitionTime":"2026-02-24T09:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.109108 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:47Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:47 crc kubenswrapper[4755]: E0224 09:56:47.113780 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:47Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.117491 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.117546 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.117563 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.117582 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.117600 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:47Z","lastTransitionTime":"2026-02-24T09:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.125341 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:47Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:47 crc kubenswrapper[4755]: E0224 09:56:47.131350 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:47Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:47 crc kubenswrapper[4755]: E0224 09:56:47.131519 4755 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.133018 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.133149 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.133230 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.133298 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.133362 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:47Z","lastTransitionTime":"2026-02-24T09:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.235611 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.235798 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.235869 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.235933 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.235991 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:47Z","lastTransitionTime":"2026-02-24T09:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.315705 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:47 crc kubenswrapper[4755]: E0224 09:56:47.315846 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.316281 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:47 crc kubenswrapper[4755]: E0224 09:56:47.316356 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.316700 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:47 crc kubenswrapper[4755]: E0224 09:56:47.316891 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.316650 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:47 crc kubenswrapper[4755]: E0224 09:56:47.317115 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.340239 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.340290 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.340306 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.340329 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.340342 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:47Z","lastTransitionTime":"2026-02-24T09:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.442753 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.442790 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.442803 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.442823 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.442838 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:47Z","lastTransitionTime":"2026-02-24T09:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.545580 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.545611 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.545619 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.545631 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.545639 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:47Z","lastTransitionTime":"2026-02-24T09:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.647905 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.647967 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.648018 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.648041 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.648056 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:47Z","lastTransitionTime":"2026-02-24T09:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.750629 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.750672 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.750685 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.750704 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.750718 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:47Z","lastTransitionTime":"2026-02-24T09:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.851549 4755 generic.go:334] "Generic (PLEG): container finished" podID="787109ef-edb9-4334-afc7-6197f57f444f" containerID="c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584" exitCode=0 Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.851670 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerDied","Data":"c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584"} Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.852184 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.852428 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.852445 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.852467 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.852482 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:47Z","lastTransitionTime":"2026-02-24T09:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.866333 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:47Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:47 crc 
kubenswrapper[4755]: I0224 09:56:47.878619 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:47Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.901458 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:47Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.915853 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:47Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.935885 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 
09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:47Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.954673 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:47Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 
09:56:47.955841 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.955885 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.955897 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.955915 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.955927 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:47Z","lastTransitionTime":"2026-02-24T09:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.971868 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:47Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:47 crc kubenswrapper[4755]: I0224 09:56:47.986314 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:47Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.006952 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc
4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\
\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.019882 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.033503 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.057511 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.071147 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.098411 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.101043 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.101158 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.101186 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.101227 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.101253 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:48Z","lastTransitionTime":"2026-02-24T09:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.114808 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.128642 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.205735 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.206185 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.206208 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.206230 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.206244 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:48Z","lastTransitionTime":"2026-02-24T09:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.309205 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.309241 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.309252 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.309270 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.309282 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:48Z","lastTransitionTime":"2026-02-24T09:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.412599 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.412632 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.412641 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.412657 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.412666 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:48Z","lastTransitionTime":"2026-02-24T09:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.514699 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.514766 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.514784 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.514809 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.514827 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:48Z","lastTransitionTime":"2026-02-24T09:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.618149 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.618209 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.618227 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.618251 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.618272 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:48Z","lastTransitionTime":"2026-02-24T09:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.721605 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.721680 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.721702 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.721727 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.721745 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:48Z","lastTransitionTime":"2026-02-24T09:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.824464 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.824545 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.824574 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.824606 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.824632 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:48Z","lastTransitionTime":"2026-02-24T09:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.859295 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dwm6v" event={"ID":"79ca0953-3a40-45a2-9305-02272f036006","Type":"ContainerStarted","Data":"66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61"} Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.865480 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerStarted","Data":"cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699"} Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.865551 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerStarted","Data":"462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98"} Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.865573 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerStarted","Data":"635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321"} Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.865594 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerStarted","Data":"b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba"} Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.865612 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerStarted","Data":"87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e"} Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 
09:56:48.865633 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerStarted","Data":"a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7"} Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.883752 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.904253 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.923121 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.927770 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:48 crc 
kubenswrapper[4755]: I0224 09:56:48.927814 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.927826 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.927846 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.927858 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:48Z","lastTransitionTime":"2026-02-24T09:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.950269 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0831
3793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.964334 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.975379 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:48 crc kubenswrapper[4755]: I0224 09:56:48.995923 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16
c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:48Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.009333 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:49Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.022817 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:49Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.031308 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.031379 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.031398 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.031444 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.031461 4755 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:49Z","lastTransitionTime":"2026-02-24T09:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.036786 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:49Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.050050 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:49Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.063456 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:49Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.077125 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:49Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.103975 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:49Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.120734 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:49Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.134533 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.134630 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.134641 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 
09:56:49.134659 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.134672 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:49Z","lastTransitionTime":"2026-02-24T09:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.134803 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:49Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:49 crc 
kubenswrapper[4755]: I0224 09:56:49.238049 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.238196 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.238225 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.238263 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.238295 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:49Z","lastTransitionTime":"2026-02-24T09:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.315668 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.315795 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.315795 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:49 crc kubenswrapper[4755]: E0224 09:56:49.315888 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.315843 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:49 crc kubenswrapper[4755]: E0224 09:56:49.316708 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:56:49 crc kubenswrapper[4755]: E0224 09:56:49.317142 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:56:49 crc kubenswrapper[4755]: E0224 09:56:49.317225 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.341230 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.341271 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.341281 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.341299 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.341312 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:49Z","lastTransitionTime":"2026-02-24T09:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.443932 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.443967 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.443978 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.443995 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.444005 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:49Z","lastTransitionTime":"2026-02-24T09:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.546366 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.546416 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.546428 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.546447 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.546460 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:49Z","lastTransitionTime":"2026-02-24T09:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.649000 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.649087 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.649105 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.649132 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.649146 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:49Z","lastTransitionTime":"2026-02-24T09:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.752443 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.752514 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.752533 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.752557 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.752578 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:49Z","lastTransitionTime":"2026-02-24T09:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.855635 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.855674 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.855686 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.855702 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.855715 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:49Z","lastTransitionTime":"2026-02-24T09:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.876114 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2cmfc" event={"ID":"04c132ba-c396-4f64-a02b-fcdae681ed74","Type":"ContainerStarted","Data":"369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168"} Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.877969 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3"} Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.879912 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba"} Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.904542 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:49Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.922943 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:49Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.936487 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:49Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.954615 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:49Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.959198 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.959555 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.959629 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.959766 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.959838 4755 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:49Z","lastTransitionTime":"2026-02-24T09:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.974754 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:49Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:49 crc kubenswrapper[4755]: I0224 09:56:49.990520 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:49Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.004155 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.023557 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.037786 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.052443 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T
09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.062715 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.062781 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.062807 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.062851 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.062869 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:50Z","lastTransitionTime":"2026-02-24T09:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.086802 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.108060 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.127488 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.156601 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.166886 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.166928 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.166942 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.166963 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.166975 4755 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:50Z","lastTransitionTime":"2026-02-24T09:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.199531 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:
56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.220112 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additiona
l-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8
c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/op
t/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.234851 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.251904 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6c
d83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:
56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.267980 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.269829 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.269890 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.269909 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.269934 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.269952 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:50Z","lastTransitionTime":"2026-02-24T09:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.284100 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.303874 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.351476 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.371655 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.372479 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.372540 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.372555 4755 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.372575 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.372587 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:50Z","lastTransitionTime":"2026-02-24T09:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.391220 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284
e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.407199 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.419853 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.434525 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.452188 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.469545 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.475316 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.475359 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.475373 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 
09:56:50.475399 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.475413 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:50Z","lastTransitionTime":"2026-02-24T09:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.483424 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc 
kubenswrapper[4755]: I0224 09:56:50.495551 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.516229 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2a
f0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:50Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.577406 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.577446 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.577456 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.577478 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.577490 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:50Z","lastTransitionTime":"2026-02-24T09:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.680008 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.680250 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.680319 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.680778 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.681171 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:50Z","lastTransitionTime":"2026-02-24T09:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.784461 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.784496 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.784506 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.784520 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.784530 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:50Z","lastTransitionTime":"2026-02-24T09:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.886089 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.886121 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.886131 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.886144 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.886153 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:50Z","lastTransitionTime":"2026-02-24T09:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.887316 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerStarted","Data":"91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794"} Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.988448 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.988690 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.988699 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.988712 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:50 crc kubenswrapper[4755]: I0224 09:56:50.988722 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:50Z","lastTransitionTime":"2026-02-24T09:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.091748 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.091792 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.091804 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.091822 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.091834 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:51Z","lastTransitionTime":"2026-02-24T09:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.194955 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.195014 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.195026 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.195046 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.195079 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:51Z","lastTransitionTime":"2026-02-24T09:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.297198 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.297240 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.297250 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.297265 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.297274 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:51Z","lastTransitionTime":"2026-02-24T09:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.315860 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:51 crc kubenswrapper[4755]: E0224 09:56:51.316027 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.316481 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:51 crc kubenswrapper[4755]: E0224 09:56:51.316547 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.316595 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:51 crc kubenswrapper[4755]: E0224 09:56:51.316648 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.316696 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:51 crc kubenswrapper[4755]: E0224 09:56:51.316752 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.399788 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.399831 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.399841 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.399857 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.399868 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:51Z","lastTransitionTime":"2026-02-24T09:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.503516 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.503576 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.503595 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.503618 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.503636 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:51Z","lastTransitionTime":"2026-02-24T09:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.607265 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.607317 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.607328 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.607362 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.607372 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:51Z","lastTransitionTime":"2026-02-24T09:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.710391 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.710431 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.710512 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.710532 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.710545 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:51Z","lastTransitionTime":"2026-02-24T09:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.813819 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.813876 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.813891 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.813914 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.813924 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:51Z","lastTransitionTime":"2026-02-24T09:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.917624 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.917694 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.917713 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.917740 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:51 crc kubenswrapper[4755]: I0224 09:56:51.917759 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:51Z","lastTransitionTime":"2026-02-24T09:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.021332 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.021372 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.021383 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.021404 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.021417 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:52Z","lastTransitionTime":"2026-02-24T09:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.124515 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.124561 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.124573 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.124590 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.124602 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:52Z","lastTransitionTime":"2026-02-24T09:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.226912 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.226980 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.227016 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.227048 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.227105 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:52Z","lastTransitionTime":"2026-02-24T09:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.332808 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.332882 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.332903 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.332933 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.332956 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:52Z","lastTransitionTime":"2026-02-24T09:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.436477 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.436527 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.436541 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.436561 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.436573 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:52Z","lastTransitionTime":"2026-02-24T09:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.540426 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.540712 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.540795 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.540922 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.541007 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:52Z","lastTransitionTime":"2026-02-24T09:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.644461 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.644519 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.644529 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.644548 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.644564 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:52Z","lastTransitionTime":"2026-02-24T09:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.763765 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.763904 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.763923 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.763946 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.763960 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:52Z","lastTransitionTime":"2026-02-24T09:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.866897 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.867258 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.867272 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.867292 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.867305 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:52Z","lastTransitionTime":"2026-02-24T09:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.971767 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.971817 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.971834 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.971879 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:52 crc kubenswrapper[4755]: I0224 09:56:52.971895 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:52Z","lastTransitionTime":"2026-02-24T09:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.074355 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.074410 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.074427 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.074453 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.074469 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:53Z","lastTransitionTime":"2026-02-24T09:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.177168 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.177221 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.177235 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.177254 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.177266 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:53Z","lastTransitionTime":"2026-02-24T09:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.206364 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.206572 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.206652 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.206699 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:53 crc kubenswrapper[4755]: E0224 09:56:53.206735 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:57:25.206697345 +0000 UTC m=+149.663219898 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:56:53 crc kubenswrapper[4755]: E0224 09:56:53.206830 4755 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:56:53 crc kubenswrapper[4755]: E0224 09:56:53.206885 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:56:53 crc kubenswrapper[4755]: E0224 09:56:53.206947 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:56:53 crc kubenswrapper[4755]: E0224 09:56:53.206966 4755 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:53 crc kubenswrapper[4755]: E0224 09:56:53.206850 4755 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:56:53 crc kubenswrapper[4755]: 
E0224 09:56:53.207037 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:57:25.206989494 +0000 UTC m=+149.663512077 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:56:53 crc kubenswrapper[4755]: E0224 09:56:53.207115 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:57:25.207060456 +0000 UTC m=+149.663583139 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:56:53 crc kubenswrapper[4755]: E0224 09:56:53.207165 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 09:57:25.207146309 +0000 UTC m=+149.663668972 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.280230 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.280289 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.280308 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.280517 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.280534 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:53Z","lastTransitionTime":"2026-02-24T09:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.307381 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs\") pod \"network-metrics-daemon-98t22\" (UID: \"82775556-3991-45ab-ac50-7ef81cafeaee\") " pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.307465 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:53 crc kubenswrapper[4755]: E0224 09:56:53.307675 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:56:53 crc kubenswrapper[4755]: E0224 09:56:53.307713 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:56:53 crc kubenswrapper[4755]: E0224 09:56:53.307732 4755 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:53 crc kubenswrapper[4755]: E0224 09:56:53.307769 4755 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:56:53 crc kubenswrapper[4755]: E0224 09:56:53.307805 
4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 09:57:25.307783853 +0000 UTC m=+149.764306426 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:56:53 crc kubenswrapper[4755]: E0224 09:56:53.307881 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs podName:82775556-3991-45ab-ac50-7ef81cafeaee nodeName:}" failed. No retries permitted until 2026-02-24 09:57:25.307849645 +0000 UTC m=+149.764372218 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs") pod "network-metrics-daemon-98t22" (UID: "82775556-3991-45ab-ac50-7ef81cafeaee") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.316318 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.316394 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.316424 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:53 crc kubenswrapper[4755]: E0224 09:56:53.316498 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.316524 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:53 crc kubenswrapper[4755]: E0224 09:56:53.316639 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:56:53 crc kubenswrapper[4755]: E0224 09:56:53.316584 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:56:53 crc kubenswrapper[4755]: E0224 09:56:53.316890 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.383662 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.383723 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.383751 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.383778 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.383796 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:53Z","lastTransitionTime":"2026-02-24T09:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.487250 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.487305 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.487501 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.487525 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.487544 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:53Z","lastTransitionTime":"2026-02-24T09:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.591008 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.591132 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.591153 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.591178 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.591210 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:53Z","lastTransitionTime":"2026-02-24T09:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.693854 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.693945 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.693969 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.694000 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.694021 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:53Z","lastTransitionTime":"2026-02-24T09:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.796983 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.797050 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.797096 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.797123 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.797140 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:53Z","lastTransitionTime":"2026-02-24T09:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.899616 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.899688 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.899705 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.899739 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.899759 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:53Z","lastTransitionTime":"2026-02-24T09:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.903461 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerStarted","Data":"8327e61e65e36ad363b82579c7a0d892755b90e5f9c120d84192a28fdd94fc5c"} Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.903906 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.903992 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.904024 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.925896 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:56:53Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.940175 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.943832 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.947245 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:53Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.968289 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:53Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:53 crc kubenswrapper[4755]: I0224 09:56:53.984016 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:53Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.003612 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.004051 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.004669 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.004926 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.005169 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.005348 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:54Z","lastTransitionTime":"2026-02-24T09:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.023709 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc 
kubenswrapper[4755]: I0224 09:56:54.039238 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.061342 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8327e61e65e36ad363b82579c7a0d892755b90e5f9c120d84192a28fdd94fc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.081819 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.101939 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.108618 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.108665 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.108678 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.108699 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.108717 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:54Z","lastTransitionTime":"2026-02-24T09:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.122756 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84013d80984ebc9
c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.142187 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.167030 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6c
d83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:
56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.208124 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.212952 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.213002 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.213014 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.213033 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.213048 4755 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:54Z","lastTransitionTime":"2026-02-24T09:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.229372 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.247190 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.271541 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8327e61e65e36ad363b82579c7a0d892755b90e5f9c120d84192a28fdd94fc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.288865 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.304767 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.316281 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.316358 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.316384 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.316413 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.316436 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:54Z","lastTransitionTime":"2026-02-24T09:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.321551 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.339110 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 
09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.354993 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"
mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 
09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.372311 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0
d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\
":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"po
dIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.395408 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.417325 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.419345 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.419384 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.419396 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.419416 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.419429 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:54Z","lastTransitionTime":"2026-02-24T09:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.449992 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.470166 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.495459 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.517359 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.522568 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.522606 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.522617 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.522636 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.522648 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:54Z","lastTransitionTime":"2026-02-24T09:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.533162 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.552468 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.588001 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:54Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.625254 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.625298 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.625310 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.625326 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.625338 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:54Z","lastTransitionTime":"2026-02-24T09:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.728562 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.728607 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.728619 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.728635 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.728646 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:54Z","lastTransitionTime":"2026-02-24T09:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.831387 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.831436 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.831448 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.831466 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.831482 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:54Z","lastTransitionTime":"2026-02-24T09:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.933225 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.933259 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.933267 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.933280 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:54 crc kubenswrapper[4755]: I0224 09:56:54.933291 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:54Z","lastTransitionTime":"2026-02-24T09:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.035042 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.035120 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.035133 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.035153 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.035166 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:55Z","lastTransitionTime":"2026-02-24T09:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.138587 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.138645 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.138657 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.138679 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.138699 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:55Z","lastTransitionTime":"2026-02-24T09:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.241731 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.242137 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.242154 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.242179 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.242197 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:55Z","lastTransitionTime":"2026-02-24T09:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.316628 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.316695 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.316696 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.317122 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:55 crc kubenswrapper[4755]: E0224 09:56:55.317236 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:56:55 crc kubenswrapper[4755]: E0224 09:56:55.317359 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.317623 4755 scope.go:117] "RemoveContainer" containerID="efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6" Feb 24 09:56:55 crc kubenswrapper[4755]: E0224 09:56:55.317769 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:56:55 crc kubenswrapper[4755]: E0224 09:56:55.317875 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.356212 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.356278 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.356296 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.356326 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.356344 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:55Z","lastTransitionTime":"2026-02-24T09:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.459647 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.459699 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.459713 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.459731 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.459744 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:55Z","lastTransitionTime":"2026-02-24T09:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.562783 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.562852 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.562868 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.562917 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.562938 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:55Z","lastTransitionTime":"2026-02-24T09:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.666214 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.666273 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.666290 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.666317 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.666337 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:55Z","lastTransitionTime":"2026-02-24T09:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.769599 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.769646 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.769664 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.769687 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.769705 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:55Z","lastTransitionTime":"2026-02-24T09:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.873151 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.873202 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.873218 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.873240 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.873256 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:55Z","lastTransitionTime":"2026-02-24T09:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.913823 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.917174 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1"} Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.918050 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.922394 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fljft_787109ef-edb9-4334-afc7-6197f57f444f/ovnkube-controller/0.log" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.927129 4755 generic.go:334] "Generic (PLEG): container finished" podID="787109ef-edb9-4334-afc7-6197f57f444f" containerID="8327e61e65e36ad363b82579c7a0d892755b90e5f9c120d84192a28fdd94fc5c" exitCode=1 Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.927190 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerDied","Data":"8327e61e65e36ad363b82579c7a0d892755b90e5f9c120d84192a28fdd94fc5c"} Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.928369 4755 scope.go:117] "RemoveContainer" containerID="8327e61e65e36ad363b82579c7a0d892755b90e5f9c120d84192a28fdd94fc5c" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.948321 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 
09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:55Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.968753 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:55Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.976417 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.976481 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.976493 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.976513 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.976531 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:55Z","lastTransitionTime":"2026-02-24T09:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:55 crc kubenswrapper[4755]: I0224 09:56:55.989610 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:55Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.008782 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.037293 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6c
d83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:
56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.058149 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.076766 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.078936 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.078981 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.078995 4755 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.079013 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.079027 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:56Z","lastTransitionTime":"2026-02-24T09:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.099887 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb
674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.116314 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.138156 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.153586 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.177705 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.182423 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.182469 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.182484 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.182507 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.182523 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:56Z","lastTransitionTime":"2026-02-24T09:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.192454 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc 
kubenswrapper[4755]: I0224 09:56:56.204240 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.224224 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8327e61e65e36ad363b82579c7a0d892755b90e5f9c120d84192a28fdd94fc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.242792 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.256786 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.274562 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: E0224 09:56:56.283666 4755 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.295780 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.311186 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.331319 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.349172 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.361717 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T
09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.385609 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8327e61e65e36ad363b82579c7a0d892755b90e5f9c120d84192a28fdd94fc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8327e61e65e36ad363b82579c7a0d892755b90e5f9c120d84192a28fdd94fc5c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09
:56:55Z\\\",\\\"message\\\":\\\"ers/factory.go:160\\\\nI0224 09:56:55.374003 6534 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 09:56:55.374292 6534 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:56:55.374567 6534 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 09:56:55.374898 6534 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 09:56:55.375017 6534 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 09:56:55.375043 6534 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 09:56:55.375095 6534 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 09:56:55.375136 6534 factory.go:656] Stopping watch factory\\\\nI0224 09:56:55.375161 6534 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 09:56:55.375180 6534 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 09:56:55.375201 6534 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.406793 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.441286 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: E0224 09:56:56.451743 4755 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.461160 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.498312 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.524297 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6c
d83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:
56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.543518 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.562983 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.587867 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e
32be11b913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.604643 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.622116 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.634760 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.646313 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.661359 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.673766 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.696001 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T
09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.724272 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8327e61e65e36ad363b82579c7a0d892755b90e5f9c120d84192a28fdd94fc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8327e61e65e36ad363b82579c7a0d892755b90e5f9c120d84192a28fdd94fc5c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09
:56:55Z\\\",\\\"message\\\":\\\"ers/factory.go:160\\\\nI0224 09:56:55.374003 6534 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 09:56:55.374292 6534 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:56:55.374567 6534 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 09:56:55.374898 6534 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 09:56:55.375017 6534 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 09:56:55.375043 6534 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 09:56:55.375095 6534 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 09:56:55.375136 6534 factory.go:656] Stopping watch factory\\\\nI0224 09:56:55.375161 6534 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 09:56:55.375180 6534 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 09:56:55.375201 6534 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.743761 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.763407 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.775934 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.790321 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.806507 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6c
d83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:
56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.825038 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.841171 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.857932 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e
32be11b913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.934696 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fljft_787109ef-edb9-4334-afc7-6197f57f444f/ovnkube-controller/0.log" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.937786 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" 
event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerStarted","Data":"4e2a210b563b36e3aedef4cedde633472e4d9a6aadc8ee87f6058eeb29c3416d"} Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.951750 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.966498 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.980538 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:56 crc kubenswrapper[4755]: I0224 09:56:56.990893 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.002204 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:57 crc 
kubenswrapper[4755]: I0224 09:56:57.014228 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.044313 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2a210b563b36e3aedef4cedde633472e4d9a6aadc8ee87f6058eeb29c3416d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8327e61e65e36ad363b82579c7a0d892755b90e5f9c120d84192a28fdd94fc5c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:56:55Z\\\",\\\"message\\\":\\\"ers/factory.go:160\\\\nI0224 09:56:55.374003 6534 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 09:56:55.374292 6534 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:56:55.374567 6534 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 09:56:55.374898 6534 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 09:56:55.375017 6534 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 09:56:55.375043 6534 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 09:56:55.375095 6534 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 09:56:55.375136 6534 factory.go:656] Stopping watch factory\\\\nI0224 09:56:55.375161 6534 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 09:56:55.375180 6534 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 09:56:55.375201 6534 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.057335 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.073042 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 
09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.087813 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.101225 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.118013 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.140951 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6c
d83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:
56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.156042 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.170422 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.195733 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.316195 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.316284 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.316274 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:57 crc kubenswrapper[4755]: E0224 09:56:57.316387 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.316317 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:57 crc kubenswrapper[4755]: E0224 09:56:57.316599 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:56:57 crc kubenswrapper[4755]: E0224 09:56:57.316872 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:56:57 crc kubenswrapper[4755]: E0224 09:56:57.317440 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.330131 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.375523 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.375604 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.375629 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.375659 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.375681 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:57Z","lastTransitionTime":"2026-02-24T09:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:57 crc kubenswrapper[4755]: E0224 09:56:57.395884 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.401802 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.401861 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.401880 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.401904 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.401925 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:57Z","lastTransitionTime":"2026-02-24T09:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:57 crc kubenswrapper[4755]: E0224 09:56:57.423697 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.429384 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.429442 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.429461 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.429487 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.429504 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:57Z","lastTransitionTime":"2026-02-24T09:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:57 crc kubenswrapper[4755]: E0224 09:56:57.443916 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.449348 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.449402 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.449419 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.449441 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.449459 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:57Z","lastTransitionTime":"2026-02-24T09:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:57 crc kubenswrapper[4755]: E0224 09:56:57.470192 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.476210 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.476263 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.476280 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.476303 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.476320 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:56:57Z","lastTransitionTime":"2026-02-24T09:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:56:57 crc kubenswrapper[4755]: E0224 09:56:57.498460 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:57 crc kubenswrapper[4755]: E0224 09:56:57.498700 4755 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.944734 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fljft_787109ef-edb9-4334-afc7-6197f57f444f/ovnkube-controller/1.log" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.946482 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fljft_787109ef-edb9-4334-afc7-6197f57f444f/ovnkube-controller/0.log" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.950358 4755 generic.go:334] "Generic (PLEG): container finished" podID="787109ef-edb9-4334-afc7-6197f57f444f" containerID="4e2a210b563b36e3aedef4cedde633472e4d9a6aadc8ee87f6058eeb29c3416d" exitCode=1 Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.950479 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerDied","Data":"4e2a210b563b36e3aedef4cedde633472e4d9a6aadc8ee87f6058eeb29c3416d"} Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.950583 4755 scope.go:117] "RemoveContainer" 
containerID="8327e61e65e36ad363b82579c7a0d892755b90e5f9c120d84192a28fdd94fc5c" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.952325 4755 scope.go:117] "RemoveContainer" containerID="4e2a210b563b36e3aedef4cedde633472e4d9a6aadc8ee87f6058eeb29c3416d" Feb 24 09:56:57 crc kubenswrapper[4755]: E0224 09:56:57.952839 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" podUID="787109ef-edb9-4334-afc7-6197f57f444f" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.976772 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:57 crc kubenswrapper[4755]: I0224 09:56:57.999608 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2a210b563b36e3aedef4cedde633472e4d9a6aadc8ee87f6058eeb29c3416d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8327e61e65e36ad363b82579c7a0d892755b90e5f9c120d84192a28fdd94fc5c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:56:55Z\\\",\\\"message\\\":\\\"ers/factory.go:160\\\\nI0224 09:56:55.374003 6534 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 09:56:55.374292 6534 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:56:55.374567 6534 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 09:56:55.374898 6534 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 09:56:55.375017 6534 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 09:56:55.375043 6534 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 09:56:55.375095 6534 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 09:56:55.375136 6534 factory.go:656] Stopping watch factory\\\\nI0224 09:56:55.375161 6534 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 09:56:55.375180 6534 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 09:56:55.375201 6534 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2a210b563b36e3aedef4cedde633472e4d9a6aadc8ee87f6058eeb29c3416d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:56:56Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0224 09:56:56.931045 6675 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0224 09:56:56.931117 
6675 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0224 09:56:56.931145 6675 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0224 09:56:56.931239 6675 factory.go:1336] Added *v1.Node event handler 7\\\\nI0224 09:56:56.931308 6675 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0224 09:56:56.931714 6675 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0224 09:56:56.931831 6675 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0224 09:56:56.931875 6675 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:56:56.931918 6675 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0224 09:56:56.932604 6675 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mount
Path\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"
/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:57Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:58 crc kubenswrapper[4755]: I0224 09:56:58.016961 4755 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:58 crc kubenswrapper[4755]: I0224 09:56:58.033478 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:58 crc kubenswrapper[4755]: I0224 09:56:58.049230 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:58 crc kubenswrapper[4755]: I0224 09:56:58.065441 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:58 crc kubenswrapper[4755]: I0224 09:56:58.083445 4755 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:58 crc kubenswrapper[4755]: I0224 09:56:58.102463 4755 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02
-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d
0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:58 crc kubenswrapper[4755]: I0224 09:56:58.118304 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56624072-fac5-436e-998d-4fa33ae69d81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd1e718cfab9164e39a5c55224adf4bbdd5f079f640714fee46be8a4fb0f976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286ba8a2e826f23c678d6ebfecc3f21235bc71057ad52d80e379
0c2740263f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:55:22Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 09:54:58.419448 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:54:58.422385 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:54:58.459101 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:54:58.464522 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:55:22.511015 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:55:22.511136 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c6dc1f59bd27418f47c0102400b6607ae9f9baf7011ce5d8c06c5b87387d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3660f48bfc0f08822a77d63d8f65807eaf1cf8ef7da4b7db349cbc61e88f732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486ea75c3392dcb5867446a7b70131ed7f1dc0e0dc8ad436679d16b6eb4c3e8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:58 crc kubenswrapper[4755]: I0224 09:56:58.134291 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 
09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:58 crc kubenswrapper[4755]: I0224 09:56:58.148477 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:58 crc kubenswrapper[4755]: I0224 09:56:58.173973 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:58 crc kubenswrapper[4755]: I0224 09:56:58.189330 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:58 crc kubenswrapper[4755]: I0224 09:56:58.203948 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:56:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:58 crc kubenswrapper[4755]: I0224 09:56:58.220433 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:58 crc kubenswrapper[4755]: I0224 09:56:58.232768 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:58 crc kubenswrapper[4755]: I0224 09:56:58.245678 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:58 crc kubenswrapper[4755]: I0224 09:56:58.957146 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fljft_787109ef-edb9-4334-afc7-6197f57f444f/ovnkube-controller/1.log" Feb 24 09:56:58 crc kubenswrapper[4755]: I0224 09:56:58.963144 4755 scope.go:117] "RemoveContainer" containerID="4e2a210b563b36e3aedef4cedde633472e4d9a6aadc8ee87f6058eeb29c3416d" Feb 24 09:56:58 crc kubenswrapper[4755]: E0224 09:56:58.963465 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" podUID="787109ef-edb9-4334-afc7-6197f57f444f" Feb 24 09:56:58 crc kubenswrapper[4755]: I0224 09:56:58.977459 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:56:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:58 crc kubenswrapper[4755]: I0224 09:56:58.995322 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:58Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:59 crc kubenswrapper[4755]: I0224 09:56:59.007157 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:59Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:59 crc kubenswrapper[4755]: I0224 09:56:59.017873 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:59Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:59 crc kubenswrapper[4755]: I0224 09:56:59.030654 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:59Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:59 crc 
kubenswrapper[4755]: I0224 09:56:59.040255 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:59Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:59 crc kubenswrapper[4755]: I0224 09:56:59.063278 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2a210b563b36e3aedef4cedde633472e4d9a6aadc8ee87f6058eeb29c3416d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2a210b563b36e3aedef4cedde633472e4d9a6aadc8ee87f6058eeb29c3416d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:56:56Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0224 
09:56:56.931045 6675 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0224 09:56:56.931117 6675 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0224 09:56:56.931145 6675 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0224 09:56:56.931239 6675 factory.go:1336] Added *v1.Node event handler 7\\\\nI0224 09:56:56.931308 6675 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0224 09:56:56.931714 6675 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0224 09:56:56.931831 6675 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0224 09:56:56.931875 6675 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:56:56.931918 6675 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0224 09:56:56.932604 6675 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458
f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:59Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:59 crc kubenswrapper[4755]: I0224 09:56:59.077093 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:59Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:59 crc kubenswrapper[4755]: I0224 09:56:59.090363 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 
09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:59Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:59 crc kubenswrapper[4755]: I0224 09:56:59.111418 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:59Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:59 crc kubenswrapper[4755]: I0224 09:56:59.132889 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:59Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:59 crc kubenswrapper[4755]: I0224 09:56:59.152405 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:59Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:59 crc kubenswrapper[4755]: I0224 09:56:59.175548 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6c
d83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:
56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:59Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:59 crc kubenswrapper[4755]: I0224 09:56:59.193644 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56624072-fac5-436e-998d-4fa33ae69d81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd1e718cfab9164e39a5c55224adf4bbdd5f079f640714fee46be8a4fb0f976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286ba8a2e826f23c678d6ebfecc3f21235bc71057ad52d80e3790c2740263f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:55:22Z\\
\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 09:54:58.419448 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:54:58.422385 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:54:58.459101 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:54:58.464522 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:55:22.511015 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:55:22.511136 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c6dc1f59bd27418f47c0102400b6607ae9f9baf7011ce5d8c06c5b87387d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3660f48bfc0f08822a77d63d8f65807eaf1cf8ef7da4b7db349cbc61e88f732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486ea75c3392dcb5867446a7b70131ed7f1dc0e0dc8ad436679d16b6eb4c3e8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:59Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:59 crc kubenswrapper[4755]: I0224 09:56:59.209214 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:59Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:59 crc kubenswrapper[4755]: I0224 09:56:59.221925 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:59Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:59 crc kubenswrapper[4755]: I0224 09:56:59.243902 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:56:59Z is after 2025-08-24T17:21:41Z" Feb 24 09:56:59 crc kubenswrapper[4755]: I0224 09:56:59.315800 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:56:59 crc kubenswrapper[4755]: I0224 09:56:59.315800 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:56:59 crc kubenswrapper[4755]: I0224 09:56:59.315901 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:56:59 crc kubenswrapper[4755]: I0224 09:56:59.315983 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:56:59 crc kubenswrapper[4755]: E0224 09:56:59.316123 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:56:59 crc kubenswrapper[4755]: E0224 09:56:59.316246 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:56:59 crc kubenswrapper[4755]: E0224 09:56:59.316367 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:56:59 crc kubenswrapper[4755]: E0224 09:56:59.316560 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:01 crc kubenswrapper[4755]: I0224 09:57:01.315777 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:01 crc kubenswrapper[4755]: I0224 09:57:01.315850 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:01 crc kubenswrapper[4755]: I0224 09:57:01.315815 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:01 crc kubenswrapper[4755]: I0224 09:57:01.315783 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:01 crc kubenswrapper[4755]: E0224 09:57:01.315934 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:01 crc kubenswrapper[4755]: E0224 09:57:01.316051 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:01 crc kubenswrapper[4755]: E0224 09:57:01.316194 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:01 crc kubenswrapper[4755]: E0224 09:57:01.316316 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:01 crc kubenswrapper[4755]: E0224 09:57:01.453475 4755 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:57:03 crc kubenswrapper[4755]: I0224 09:57:03.316289 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:03 crc kubenswrapper[4755]: I0224 09:57:03.316373 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:03 crc kubenswrapper[4755]: I0224 09:57:03.316507 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:03 crc kubenswrapper[4755]: E0224 09:57:03.316515 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:03 crc kubenswrapper[4755]: E0224 09:57:03.316663 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:03 crc kubenswrapper[4755]: I0224 09:57:03.316724 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:03 crc kubenswrapper[4755]: E0224 09:57:03.316870 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:03 crc kubenswrapper[4755]: E0224 09:57:03.316924 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:05 crc kubenswrapper[4755]: I0224 09:57:05.316107 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:05 crc kubenswrapper[4755]: I0224 09:57:05.316150 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:05 crc kubenswrapper[4755]: E0224 09:57:05.316350 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:05 crc kubenswrapper[4755]: I0224 09:57:05.316722 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:05 crc kubenswrapper[4755]: E0224 09:57:05.316865 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:05 crc kubenswrapper[4755]: I0224 09:57:05.317119 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:05 crc kubenswrapper[4755]: E0224 09:57:05.317240 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:05 crc kubenswrapper[4755]: E0224 09:57:05.317546 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:06 crc kubenswrapper[4755]: I0224 09:57:06.339121 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56624072-fac5-436e-998d-4fa33ae69d81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd1e718cfab9164e39a5c55224adf4bbdd5f079f640714fee46be8a4fb0f976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286ba8a2e826f23c678d6ebfecc3f21235bc71057ad52d80e3790c2740263f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:55:22Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 09:54:58.419448 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0224 09:54:58.422385 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:54:58.459101 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:54:58.464522 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:55:22.511015 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:55:22.511136 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c6dc1f59bd27418f47c0102400b6607ae9f9baf7011ce5d8c06c5b87387d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3660f48bfc0f08822a77d63d8f65807eaf1cf8ef7da4b7db349cbc61e88f732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486ea75c3392dcb5867446a7b70131ed7f1dc0e0dc8ad436679d16b6eb4c3e8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:06 crc kubenswrapper[4755]: I0224 09:57:06.363635 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:06 crc kubenswrapper[4755]: I0224 09:57:06.384754 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:06 crc kubenswrapper[4755]: I0224 09:57:06.409593 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:06 crc kubenswrapper[4755]: I0224 09:57:06.430795 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:06 crc kubenswrapper[4755]: E0224 09:57:06.454730 4755 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:57:06 crc kubenswrapper[4755]: I0224 09:57:06.458709 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceac
count\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:06 crc kubenswrapper[4755]: I0224 09:57:06.485487 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"dat
a-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441
ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:06 crc kubenswrapper[4755]: I0224 09:57:06.504791 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:06 crc kubenswrapper[4755]: I0224 09:57:06.523111 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:06 crc kubenswrapper[4755]: I0224 09:57:06.540790 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:57:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:06 crc kubenswrapper[4755]: I0224 09:57:06.561437 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:06 crc kubenswrapper[4755]: I0224 09:57:06.577514 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:06 crc kubenswrapper[4755]: I0224 09:57:06.593997 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:06 crc kubenswrapper[4755]: I0224 09:57:06.614444 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:06 crc kubenswrapper[4755]: I0224 09:57:06.631216 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:06 crc kubenswrapper[4755]: I0224 09:57:06.648286 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T
09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:06 crc kubenswrapper[4755]: I0224 09:57:06.677165 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2a210b563b36e3aedef4cedde633472e4d9a6aadc8ee87f6058eeb29c3416d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2a210b563b36e3aedef4cedde633472e4d9a6aadc8ee87f6058eeb29c3416d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:56:56Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert 
Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0224 09:56:56.931045 6675 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0224 09:56:56.931117 6675 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0224 09:56:56.931145 6675 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0224 09:56:56.931239 6675 factory.go:1336] Added *v1.Node event handler 7\\\\nI0224 09:56:56.931308 6675 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0224 09:56:56.931714 6675 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0224 09:56:56.931831 6675 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0224 09:56:56.931875 6675 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:56:56.931918 6675 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0224 09:56:56.932604 6675 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458
f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:06Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.315963 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.316016 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.316009 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:07 crc kubenswrapper[4755]: E0224 09:57:07.316747 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.316155 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:07 crc kubenswrapper[4755]: E0224 09:57:07.316950 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:07 crc kubenswrapper[4755]: E0224 09:57:07.316549 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:07 crc kubenswrapper[4755]: E0224 09:57:07.317146 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.752050 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.752202 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.752227 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.752261 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.752285 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:07Z","lastTransitionTime":"2026-02-24T09:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:57:07 crc kubenswrapper[4755]: E0224 09:57:07.768684 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:07Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.775053 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.775209 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.775235 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.775265 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.775285 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:07Z","lastTransitionTime":"2026-02-24T09:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:57:07 crc kubenswrapper[4755]: E0224 09:57:07.797802 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:07Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.803642 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.803711 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.803734 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.803764 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.803787 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:07Z","lastTransitionTime":"2026-02-24T09:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:57:07 crc kubenswrapper[4755]: E0224 09:57:07.823128 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:07Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.829183 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.829233 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.829253 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.829280 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.829301 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:07Z","lastTransitionTime":"2026-02-24T09:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:57:07 crc kubenswrapper[4755]: E0224 09:57:07.847750 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:07Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.854039 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.854125 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.854147 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.854171 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:07 crc kubenswrapper[4755]: I0224 09:57:07.854190 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:07Z","lastTransitionTime":"2026-02-24T09:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:57:07 crc kubenswrapper[4755]: E0224 09:57:07.874608 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:07Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:07 crc kubenswrapper[4755]: E0224 09:57:07.874764 4755 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 09:57:09 crc kubenswrapper[4755]: I0224 09:57:09.315791 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:09 crc kubenswrapper[4755]: I0224 09:57:09.315791 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:09 crc kubenswrapper[4755]: E0224 09:57:09.316503 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:09 crc kubenswrapper[4755]: I0224 09:57:09.315879 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:09 crc kubenswrapper[4755]: E0224 09:57:09.316621 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:09 crc kubenswrapper[4755]: I0224 09:57:09.315800 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:09 crc kubenswrapper[4755]: E0224 09:57:09.316332 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:09 crc kubenswrapper[4755]: E0224 09:57:09.316715 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:10 crc kubenswrapper[4755]: I0224 09:57:10.513355 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:57:10 crc kubenswrapper[4755]: I0224 09:57:10.551001 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b
7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:10Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:10 crc kubenswrapper[4755]: I0224 09:57:10.573491 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:10Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:10 crc kubenswrapper[4755]: I0224 09:57:10.592707 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e
32be11b913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:10Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:10 crc kubenswrapper[4755]: I0224 09:57:10.612401 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:57:10Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:10 crc kubenswrapper[4755]: I0224 09:57:10.632583 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:10Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:10 crc kubenswrapper[4755]: I0224 09:57:10.655395 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:10Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:10 crc kubenswrapper[4755]: I0224 09:57:10.669595 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:10Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:10 crc kubenswrapper[4755]: I0224 09:57:10.686328 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:10Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:10 crc kubenswrapper[4755]: I0224 09:57:10.702187 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:10Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:10 crc kubenswrapper[4755]: I0224 09:57:10.719674 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T
09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:10Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:10 crc kubenswrapper[4755]: I0224 09:57:10.756086 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2a210b563b36e3aedef4cedde633472e4d9a6aadc8ee87f6058eeb29c3416d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2a210b563b36e3aedef4cedde633472e4d9a6aadc8ee87f6058eeb29c3416d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:56:56Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert 
Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0224 09:56:56.931045 6675 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0224 09:56:56.931117 6675 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0224 09:56:56.931145 6675 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0224 09:56:56.931239 6675 factory.go:1336] Added *v1.Node event handler 7\\\\nI0224 09:56:56.931308 6675 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0224 09:56:56.931714 6675 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0224 09:56:56.931831 6675 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0224 09:56:56.931875 6675 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:56:56.931918 6675 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0224 09:56:56.932604 6675 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458
f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:10Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:10 crc kubenswrapper[4755]: I0224 09:57:10.777181 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56624072-fac5-436e-998d-4fa33ae69d81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd1e718cfab9164e39a5c55224adf4bbdd5f079f640714fee46be8a4fb0f976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286ba8a2e826f23c678d6ebfecc3f21235bc71057ad52d80e3790c2740263f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:55:22Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 09:54:58.419448 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:54:58.422385 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:54:58.459101 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:54:58.464522 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:55:22.511015 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:55:22.511136 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c6dc1f59bd27418f47c0102400b6607ae9f9baf7011ce5d8c06c5b87387d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3660f48bfc0f08822a77d63d8f65807eaf1cf8ef7da4b7db349cbc61e88f732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486ea75c3392dcb5867446a7b70131ed7f1dc0e0dc8ad436679d16b6eb4c3e8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:10Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:10 crc kubenswrapper[4755]: I0224 09:57:10.797001 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597
126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating 
requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:10Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:10 crc kubenswrapper[4755]: I0224 09:57:10.816890 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:10Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:10 crc kubenswrapper[4755]: I0224 09:57:10.838733 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:10Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:10 crc kubenswrapper[4755]: I0224 09:57:10.856702 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:10Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:10 crc kubenswrapper[4755]: I0224 09:57:10.880714 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6c
d83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:
56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:10Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:11 crc kubenswrapper[4755]: I0224 09:57:11.315826 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:11 crc kubenswrapper[4755]: I0224 09:57:11.315882 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:11 crc kubenswrapper[4755]: I0224 09:57:11.315832 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:11 crc kubenswrapper[4755]: I0224 09:57:11.315992 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:11 crc kubenswrapper[4755]: E0224 09:57:11.316240 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:11 crc kubenswrapper[4755]: E0224 09:57:11.316439 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:11 crc kubenswrapper[4755]: E0224 09:57:11.316523 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:11 crc kubenswrapper[4755]: E0224 09:57:11.316772 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:11 crc kubenswrapper[4755]: E0224 09:57:11.456357 4755 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:57:13 crc kubenswrapper[4755]: I0224 09:57:13.315625 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:13 crc kubenswrapper[4755]: I0224 09:57:13.315710 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:13 crc kubenswrapper[4755]: E0224 09:57:13.315810 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:13 crc kubenswrapper[4755]: I0224 09:57:13.315859 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:13 crc kubenswrapper[4755]: I0224 09:57:13.315906 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:13 crc kubenswrapper[4755]: E0224 09:57:13.316070 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:13 crc kubenswrapper[4755]: E0224 09:57:13.316443 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:13 crc kubenswrapper[4755]: E0224 09:57:13.316547 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:13 crc kubenswrapper[4755]: I0224 09:57:13.317532 4755 scope.go:117] "RemoveContainer" containerID="4e2a210b563b36e3aedef4cedde633472e4d9a6aadc8ee87f6058eeb29c3416d" Feb 24 09:57:13 crc kubenswrapper[4755]: I0224 09:57:13.336240 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 24 09:57:14 crc kubenswrapper[4755]: I0224 09:57:14.063839 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fljft_787109ef-edb9-4334-afc7-6197f57f444f/ovnkube-controller/1.log" Feb 24 09:57:14 crc kubenswrapper[4755]: I0224 09:57:14.067149 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerStarted","Data":"a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20"} Feb 24 09:57:14 crc kubenswrapper[4755]: I0224 09:57:14.067764 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:57:14 crc kubenswrapper[4755]: I0224 09:57:14.084241 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc842a89-ef27-4774-83e7-cd82d2be82d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e84672e8d77bad12c4a89084f19b7bab0dd4c0caaaa038c165095b2d509910b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7da773bd75b944d3f90cf4f68330fccbbfceb6441356b2561b48f2d7d90a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c839041bf4a2cd96fe63f169ec201656be8f3343fb5398ecce2c2c0848e9987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:14Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:14 crc kubenswrapper[4755]: I0224 09:57:14.096693 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:57:14Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:14 crc kubenswrapper[4755]: I0224 09:57:14.114835 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:14Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:14 crc kubenswrapper[4755]: I0224 09:57:14.128989 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:14Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:14 crc kubenswrapper[4755]: I0224 09:57:14.140615 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:14Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:14 crc kubenswrapper[4755]: I0224 09:57:14.156624 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:14Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:14 crc kubenswrapper[4755]: I0224 09:57:14.167511 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:14Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:14 crc kubenswrapper[4755]: I0224 09:57:14.182598 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T
09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:14Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:14 crc kubenswrapper[4755]: I0224 09:57:14.204555 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2a210b563b36e3aedef4cedde633472e4d9a6aadc8ee87f6058eeb29c3416d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:56:56Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert 
Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0224 09:56:56.931045 6675 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0224 09:56:56.931117 6675 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0224 09:56:56.931145 6675 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0224 09:56:56.931239 6675 factory.go:1336] Added *v1.Node event handler 7\\\\nI0224 09:56:56.931308 6675 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0224 09:56:56.931714 6675 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0224 09:56:56.931831 6675 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0224 09:56:56.931875 6675 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:56:56.931918 6675 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0224 09:56:56.932604 6675 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:14Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:14 crc kubenswrapper[4755]: I0224 09:57:14.222686 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56624072-fac5-436e-998d-4fa33ae69d81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd1e718cfab9164e39a5c55224adf4bbdd5f079f640714fee46be8a4fb0f976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286ba8a2e826f23c678d6ebfecc3f21235bc71057ad52d80e3790c2740263f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:55:22Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 09:54:58.419448 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:54:58.422385 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:54:58.459101 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:54:58.464522 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:55:22.511015 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:55:22.511136 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c6dc1f59bd27418f47c0102400b6607ae9f9baf7011ce5d8c06c5b87387d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3660f48bfc0f08822a77d63d8f65807eaf1cf8ef7da4b7db349cbc61e88f732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486ea75c3392dcb5867446a7b70131ed7f1dc0e0dc8ad436679d16b6eb4c3e8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:14Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:14 crc kubenswrapper[4755]: I0224 09:57:14.244232 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597
126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating 
requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:14Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:14 crc kubenswrapper[4755]: I0224 09:57:14.261915 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:14Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:14 crc kubenswrapper[4755]: I0224 09:57:14.284559 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:14Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:14 crc kubenswrapper[4755]: I0224 09:57:14.304622 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:14Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:14 crc kubenswrapper[4755]: I0224 09:57:14.329464 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6c
d83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:
56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:14Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:14 crc kubenswrapper[4755]: I0224 09:57:14.367707 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:14Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:14 crc kubenswrapper[4755]: I0224 09:57:14.386884 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:14Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:14 crc kubenswrapper[4755]: I0224 09:57:14.399451 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e
32be11b913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:14Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.075669 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fljft_787109ef-edb9-4334-afc7-6197f57f444f/ovnkube-controller/2.log" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.076833 4755 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fljft_787109ef-edb9-4334-afc7-6197f57f444f/ovnkube-controller/1.log" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.081607 4755 generic.go:334] "Generic (PLEG): container finished" podID="787109ef-edb9-4334-afc7-6197f57f444f" containerID="a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20" exitCode=1 Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.081668 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerDied","Data":"a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20"} Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.081738 4755 scope.go:117] "RemoveContainer" containerID="4e2a210b563b36e3aedef4cedde633472e4d9a6aadc8ee87f6058eeb29c3416d" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.082649 4755 scope.go:117] "RemoveContainer" containerID="a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20" Feb 24 09:57:15 crc kubenswrapper[4755]: E0224 09:57:15.082888 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" podUID="787109ef-edb9-4334-afc7-6197f57f444f" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.119109 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:15Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.141284 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:15Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.159861 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:15Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.180155 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc842a89-ef27-4774-83e7-cd82d2be82d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e84672e8d77bad12c4a89084f19b7bab0dd4c0caaaa038c165095b2d509910b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7da773bd75b944d3f90cf4f68330fccbbfceb6441356b2561b48f2d7d90a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c839041bf4a2cd96fe63f169ec201656be8f3343fb5398ecce2c2c0848e9987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:15Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.200180 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:57:15Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.220156 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:15Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.237940 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:15Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.254446 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:15Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.275563 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:15Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.293694 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:15Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.310218 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T
09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:15Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.315411 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.315484 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.315442 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.315586 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:15 crc kubenswrapper[4755]: E0224 09:57:15.315737 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:15 crc kubenswrapper[4755]: E0224 09:57:15.315939 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:15 crc kubenswrapper[4755]: E0224 09:57:15.316133 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:15 crc kubenswrapper[4755]: E0224 09:57:15.316303 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.344318 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2a210b563b36e3aedef4cedde633472e4d9a6aadc8ee87f6058eeb29c3416d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:56:56Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0224 
09:56:56.931045 6675 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0224 09:56:56.931117 6675 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0224 09:56:56.931145 6675 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0224 09:56:56.931239 6675 factory.go:1336] Added *v1.Node event handler 7\\\\nI0224 09:56:56.931308 6675 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0224 09:56:56.931714 6675 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0224 09:56:56.931831 6675 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0224 09:56:56.931875 6675 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:56:56.931918 6675 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0224 09:56:56.932604 6675 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:57:14Z\\\",\\\"message\\\":\\\"60\\\\nI0224 09:57:14.343342 6876 factory.go:656] Stopping watch factory\\\\nI0224 09:57:14.343363 6876 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 09:57:14.343364 6876 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 09:57:14.343402 6876 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 09:57:14.343456 6876 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 
09:57:14.343123 6876 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-etcd/etcd for endpointslice openshift-etcd/etcd-4gsrx as it is not a known egress service\\\\nI0224 09:57:14.343744 6876 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0224 09:57:14.343810 6876 egressservice_zone_node.go:110] Processing sync for Egress Service node crc\\\\nI0224 09:57:14.343848 6876 egressservice_zone_node.go:113] Finished syncing Egress Service node crc: 42.411µs\\\\nI0224 09:57:14.343932 6876 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:57:14.344465 6876 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:57:14.344511 6876 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0224 09:57:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"
mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:15Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.371140 4755 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56624072-fac5-436e-998d-4fa33ae69d81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd1e718cfab9164e39a5c55224adf4bbdd5f079f640714fee46be8a4fb0f976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286ba8a2e826f23c678d6ebfecc3f21235bc71057ad52d80e3790c2740263f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:55:22Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml 
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 09:54:58.419448 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:54:58.422385 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:54:58.459101 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:54:58.464522 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:55:22.511015 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:55:22.511136 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c6dc1f59bd27418f47c0102400b6607ae9f9baf7011ce5d8c06c5b87387d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3660f48bfc0f08822a77d63d8f65807eaf1cf8ef7da4b7db349cbc61e88f732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486ea75c3392dcb5867446a7b70131ed7f1dc0e0dc8ad436679d16b6eb4c3e8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:15Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.396273 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea7137
38b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:15Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.415695 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:15Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.432978 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:15Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.453909 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:15Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:15 crc kubenswrapper[4755]: I0224 09:57:15.474633 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6c
d83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:
56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:15Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.091054 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fljft_787109ef-edb9-4334-afc7-6197f57f444f/ovnkube-controller/2.log" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.097377 4755 scope.go:117] "RemoveContainer" containerID="a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20" Feb 24 09:57:16 crc kubenswrapper[4755]: E0224 09:57:16.097648 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" podUID="787109ef-edb9-4334-afc7-6197f57f444f" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.120426 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc842a89-ef27-4774-83e7-cd82d2be82d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e84672e8d77bad12c4a89084f19b7bab0dd4c0caaaa038c165095b2d509910b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7da773bd75b944d3f90cf4f68330fccbbfceb6441356b2561b48f2d7d90a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c839041bf4a2cd96fe63f169ec201656be8f3343fb5398ecce2c2c0848e9987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.142108 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.165765 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.188372 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.208230 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.229977 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.247434 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.264789 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T
09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.300520 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:57:14Z\\\",\\\"message\\\":\\\"60\\\\nI0224 09:57:14.343342 6876 factory.go:656] Stopping watch factory\\\\nI0224 
09:57:14.343363 6876 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 09:57:14.343364 6876 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 09:57:14.343402 6876 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 09:57:14.343456 6876 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:57:14.343123 6876 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-etcd/etcd for endpointslice openshift-etcd/etcd-4gsrx as it is not a known egress service\\\\nI0224 09:57:14.343744 6876 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0224 09:57:14.343810 6876 egressservice_zone_node.go:110] Processing sync for Egress Service node crc\\\\nI0224 09:57:14.343848 6876 egressservice_zone_node.go:113] Finished syncing Egress Service node crc: 42.411µs\\\\nI0224 09:57:14.343932 6876 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:57:14.344465 6876 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:57:14.344511 6876 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0224 09:57:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:57:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458
f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.321210 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56624072-fac5-436e-998d-4fa33ae69d81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd1e718cfab9164e39a5c55224adf4bbdd5f079f640714fee46be8a4fb0f976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286ba8a2e826f23c678d6ebfecc3f21235bc71057ad52d80e3790c2740263f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:55:22Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 09:54:58.419448 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:54:58.422385 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:54:58.459101 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:54:58.464522 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:55:22.511015 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:55:22.511136 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c6dc1f59bd27418f47c0102400b6607ae9f9baf7011ce5d8c06c5b87387d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3660f48bfc0f08822a77d63d8f65807eaf1cf8ef7da4b7db349cbc61e88f732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486ea75c3392dcb5867446a7b70131ed7f1dc0e0dc8ad436679d16b6eb4c3e8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.341697 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597
126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating 
requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.361460 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.383146 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.404212 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.428821 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6c
d83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:
56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: E0224 09:57:16.457303 4755 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.476575 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir
\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a
5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.500487 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.518805 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.539862 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.574925 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.597535 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.618760 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.639252 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.658787 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.677106 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.698164 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc842a89-ef27-4774-83e7-cd82d2be82d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e84672e8d77bad12c4a89084f19b7bab0dd4c0caaaa038c165095b2d509910b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7da773bd75b944d3f90cf4f68330fccbbfceb6441356b2561b48f2d7d90a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c839041bf4a2cd96fe63f169ec201656be8f3343fb5398ecce2c2c0848e9987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.714928 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.768719 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:57:14Z\\\",\\\"message\\\":\\\"60\\\\nI0224 09:57:14.343342 6876 factory.go:656] Stopping watch factory\\\\nI0224 09:57:14.343363 6876 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 09:57:14.343364 6876 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 09:57:14.343402 6876 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 09:57:14.343456 
6876 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:57:14.343123 6876 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-etcd/etcd for endpointslice openshift-etcd/etcd-4gsrx as it is not a known egress service\\\\nI0224 09:57:14.343744 6876 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0224 09:57:14.343810 6876 egressservice_zone_node.go:110] Processing sync for Egress Service node crc\\\\nI0224 09:57:14.343848 6876 egressservice_zone_node.go:113] Finished syncing Egress Service node crc: 42.411µs\\\\nI0224 09:57:14.343932 6876 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:57:14.344465 6876 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:57:14.344511 6876 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0224 09:57:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:57:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458
f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.796958 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.815243 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc 
kubenswrapper[4755]: I0224 09:57:16.838407 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.860262 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.881900 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.906586 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6c
d83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:
56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.928307 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56624072-fac5-436e-998d-4fa33ae69d81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd1e718cfab9164e39a5c55224adf4bbdd5f079f640714fee46be8a4fb0f976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286ba8a2e826f23c678d6ebfecc3f21235bc71057ad52d80e3790c2740263f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:55:22Z\\
\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 09:54:58.419448 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:54:58.422385 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:54:58.459101 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:54:58.464522 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:55:22.511015 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:55:22.511136 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c6dc1f59bd27418f47c0102400b6607ae9f9baf7011ce5d8c06c5b87387d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3660f48bfc0f08822a77d63d8f65807eaf1cf8ef7da4b7db349cbc61e88f732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486ea75c3392dcb5867446a7b70131ed7f1dc0e0dc8ad436679d16b6eb4c3e8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:16 crc kubenswrapper[4755]: I0224 09:57:16.952574 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea7137
38b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:16Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:17 crc kubenswrapper[4755]: I0224 09:57:17.316054 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:17 crc kubenswrapper[4755]: I0224 09:57:17.316060 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:17 crc kubenswrapper[4755]: I0224 09:57:17.316296 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:17 crc kubenswrapper[4755]: E0224 09:57:17.316416 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:17 crc kubenswrapper[4755]: E0224 09:57:17.316536 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:17 crc kubenswrapper[4755]: E0224 09:57:17.316667 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:17 crc kubenswrapper[4755]: I0224 09:57:17.316915 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:17 crc kubenswrapper[4755]: E0224 09:57:17.317239 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:17 crc kubenswrapper[4755]: I0224 09:57:17.956286 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:17 crc kubenswrapper[4755]: I0224 09:57:17.956366 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:17 crc kubenswrapper[4755]: I0224 09:57:17.956390 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:17 crc kubenswrapper[4755]: I0224 09:57:17.956421 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:17 crc kubenswrapper[4755]: I0224 09:57:17.956485 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:17Z","lastTransitionTime":"2026-02-24T09:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:57:17 crc kubenswrapper[4755]: E0224 09:57:17.978160 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:17Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:17 crc kubenswrapper[4755]: I0224 09:57:17.984782 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:17 crc kubenswrapper[4755]: I0224 09:57:17.985007 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:17 crc kubenswrapper[4755]: I0224 09:57:17.985217 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:17 crc kubenswrapper[4755]: I0224 09:57:17.985381 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:17 crc kubenswrapper[4755]: I0224 09:57:17.985547 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:17Z","lastTransitionTime":"2026-02-24T09:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:57:18 crc kubenswrapper[4755]: E0224 09:57:18.007034 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:18 crc kubenswrapper[4755]: I0224 09:57:18.012659 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:18 crc kubenswrapper[4755]: I0224 09:57:18.012726 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:18 crc kubenswrapper[4755]: I0224 09:57:18.012748 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:18 crc kubenswrapper[4755]: I0224 09:57:18.012777 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:18 crc kubenswrapper[4755]: I0224 09:57:18.012800 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:18Z","lastTransitionTime":"2026-02-24T09:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:57:18 crc kubenswrapper[4755]: E0224 09:57:18.034915 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:18 crc kubenswrapper[4755]: I0224 09:57:18.042250 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:18 crc kubenswrapper[4755]: I0224 09:57:18.042356 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:18 crc kubenswrapper[4755]: I0224 09:57:18.042376 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:18 crc kubenswrapper[4755]: I0224 09:57:18.042402 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:18 crc kubenswrapper[4755]: I0224 09:57:18.042423 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:18Z","lastTransitionTime":"2026-02-24T09:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:57:18 crc kubenswrapper[4755]: E0224 09:57:18.065922 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:18 crc kubenswrapper[4755]: I0224 09:57:18.071731 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:18 crc kubenswrapper[4755]: I0224 09:57:18.071806 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:18 crc kubenswrapper[4755]: I0224 09:57:18.071824 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:18 crc kubenswrapper[4755]: I0224 09:57:18.071855 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:18 crc kubenswrapper[4755]: I0224 09:57:18.071875 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:18Z","lastTransitionTime":"2026-02-24T09:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:57:18 crc kubenswrapper[4755]: E0224 09:57:18.093166 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:18Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:18 crc kubenswrapper[4755]: E0224 09:57:18.093407 4755 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 09:57:19 crc kubenswrapper[4755]: I0224 09:57:19.316511 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:19 crc kubenswrapper[4755]: I0224 09:57:19.316579 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:19 crc kubenswrapper[4755]: I0224 09:57:19.316556 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:19 crc kubenswrapper[4755]: I0224 09:57:19.316511 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:19 crc kubenswrapper[4755]: E0224 09:57:19.316774 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:19 crc kubenswrapper[4755]: E0224 09:57:19.316875 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:19 crc kubenswrapper[4755]: E0224 09:57:19.317031 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:19 crc kubenswrapper[4755]: E0224 09:57:19.317192 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:21 crc kubenswrapper[4755]: I0224 09:57:21.315882 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:21 crc kubenswrapper[4755]: I0224 09:57:21.315982 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:21 crc kubenswrapper[4755]: I0224 09:57:21.316046 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:21 crc kubenswrapper[4755]: E0224 09:57:21.316223 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:21 crc kubenswrapper[4755]: E0224 09:57:21.316612 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:21 crc kubenswrapper[4755]: E0224 09:57:21.316851 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:21 crc kubenswrapper[4755]: I0224 09:57:21.317024 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:21 crc kubenswrapper[4755]: E0224 09:57:21.317230 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:21 crc kubenswrapper[4755]: E0224 09:57:21.458712 4755 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:57:23 crc kubenswrapper[4755]: I0224 09:57:23.315639 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:23 crc kubenswrapper[4755]: I0224 09:57:23.315711 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:23 crc kubenswrapper[4755]: I0224 09:57:23.315822 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:23 crc kubenswrapper[4755]: E0224 09:57:23.315814 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:23 crc kubenswrapper[4755]: I0224 09:57:23.315903 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:23 crc kubenswrapper[4755]: E0224 09:57:23.316132 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:23 crc kubenswrapper[4755]: E0224 09:57:23.316323 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:23 crc kubenswrapper[4755]: E0224 09:57:23.316387 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:25 crc kubenswrapper[4755]: I0224 09:57:25.295688 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:57:25 crc kubenswrapper[4755]: I0224 09:57:25.295863 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:25 crc kubenswrapper[4755]: I0224 09:57:25.295937 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:25 crc kubenswrapper[4755]: E0224 09:57:25.296018 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:29.295978181 +0000 UTC m=+213.752500754 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:57:25 crc kubenswrapper[4755]: E0224 09:57:25.296122 4755 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:57:25 crc kubenswrapper[4755]: I0224 09:57:25.296125 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:25 crc kubenswrapper[4755]: E0224 09:57:25.296202 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:58:29.296178497 +0000 UTC m=+213.752701080 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:57:25 crc kubenswrapper[4755]: E0224 09:57:25.296267 4755 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:57:25 crc kubenswrapper[4755]: E0224 09:57:25.296346 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:57:25 crc kubenswrapper[4755]: E0224 09:57:25.296384 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:57:25 crc kubenswrapper[4755]: E0224 09:57:25.296405 4755 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:57:25 crc kubenswrapper[4755]: E0224 09:57:25.296416 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:58:29.296387794 +0000 UTC m=+213.752910377 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:57:25 crc kubenswrapper[4755]: E0224 09:57:25.296470 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 09:58:29.296447846 +0000 UTC m=+213.752970419 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:57:25 crc kubenswrapper[4755]: I0224 09:57:25.315853 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:25 crc kubenswrapper[4755]: I0224 09:57:25.315899 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:25 crc kubenswrapper[4755]: I0224 09:57:25.315863 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:25 crc kubenswrapper[4755]: I0224 09:57:25.315852 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:25 crc kubenswrapper[4755]: E0224 09:57:25.316028 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:25 crc kubenswrapper[4755]: E0224 09:57:25.316195 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:25 crc kubenswrapper[4755]: E0224 09:57:25.316385 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:25 crc kubenswrapper[4755]: E0224 09:57:25.316443 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:25 crc kubenswrapper[4755]: I0224 09:57:25.397502 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs\") pod \"network-metrics-daemon-98t22\" (UID: \"82775556-3991-45ab-ac50-7ef81cafeaee\") " pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:25 crc kubenswrapper[4755]: I0224 09:57:25.397576 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:25 crc kubenswrapper[4755]: E0224 09:57:25.397711 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:57:25 crc kubenswrapper[4755]: E0224 09:57:25.397714 4755 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:57:25 crc kubenswrapper[4755]: E0224 09:57:25.397872 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs podName:82775556-3991-45ab-ac50-7ef81cafeaee nodeName:}" failed. No retries permitted until 2026-02-24 09:58:29.397835356 +0000 UTC m=+213.854357949 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs") pod "network-metrics-daemon-98t22" (UID: "82775556-3991-45ab-ac50-7ef81cafeaee") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:57:25 crc kubenswrapper[4755]: E0224 09:57:25.397729 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:57:25 crc kubenswrapper[4755]: E0224 09:57:25.397918 4755 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:57:25 crc kubenswrapper[4755]: E0224 09:57:25.397986 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 09:58:29.39796717 +0000 UTC m=+213.854489753 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:57:26 crc kubenswrapper[4755]: I0224 09:57:26.341754 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:26 crc kubenswrapper[4755]: I0224 09:57:26.360289 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:26 crc kubenswrapper[4755]: I0224 09:57:26.375896 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T
09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:26 crc kubenswrapper[4755]: I0224 09:57:26.407926 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:57:14Z\\\",\\\"message\\\":\\\"60\\\\nI0224 09:57:14.343342 6876 factory.go:656] Stopping watch factory\\\\nI0224 
09:57:14.343363 6876 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 09:57:14.343364 6876 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 09:57:14.343402 6876 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 09:57:14.343456 6876 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:57:14.343123 6876 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-etcd/etcd for endpointslice openshift-etcd/etcd-4gsrx as it is not a known egress service\\\\nI0224 09:57:14.343744 6876 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0224 09:57:14.343810 6876 egressservice_zone_node.go:110] Processing sync for Egress Service node crc\\\\nI0224 09:57:14.343848 6876 egressservice_zone_node.go:113] Finished syncing Egress Service node crc: 42.411µs\\\\nI0224 09:57:14.343932 6876 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:57:14.344465 6876 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:57:14.344511 6876 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0224 09:57:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:57:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458
f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:26 crc kubenswrapper[4755]: I0224 09:57:26.421489 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:26 crc kubenswrapper[4755]: I0224 09:57:26.438833 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6c
d83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:
56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:26 crc kubenswrapper[4755]: E0224 09:57:26.459781 4755 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:57:26 crc kubenswrapper[4755]: I0224 09:57:26.459668 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56624072-fac5-436e-998d-4fa33ae69d81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd1e718cfab9164e39a5c55224adf4bbdd5f079f640714fee46be8a4fb0f976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286ba8a2e826f23c678d6ebfecc3f21235bc71057ad52d80e3790c2740263f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:55:22Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 09:54:58.419448 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0224 09:54:58.422385 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:54:58.459101 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:54:58.464522 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:55:22.511015 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:55:22.511136 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c6dc1f59bd27418f47c0102400b6607ae9f9baf7011ce5d8c06c5b87387d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3660f48bfc0f08822a77d63d8f65807eaf1cf8ef7da4b7db349cbc61e88f732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486ea75c3392dcb5867446a7b70131ed7f1dc0e0dc8ad436679d16b6eb4c3e8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:26 crc kubenswrapper[4755]: I0224 09:57:26.480204 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:26 crc kubenswrapper[4755]: I0224 09:57:26.496247 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:26 crc kubenswrapper[4755]: I0224 09:57:26.522462 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:26 crc kubenswrapper[4755]: I0224 09:57:26.583093 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:26 crc kubenswrapper[4755]: I0224 09:57:26.599721 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:26 crc kubenswrapper[4755]: I0224 09:57:26.613059 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:26 crc kubenswrapper[4755]: I0224 09:57:26.627292 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:26 crc kubenswrapper[4755]: I0224 09:57:26.639925 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:26 crc kubenswrapper[4755]: I0224 09:57:26.654838 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc842a89-ef27-4774-83e7-cd82d2be82d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e84672e8d77bad12c4a89084f19b7bab0dd4c0caaaa038c165095b2d509910b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7da773bd75b944d3f90cf4f68330fccbbfceb6441356b2561b48f2d7d90a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c839041bf4a2cd96fe63f169ec201656be8f3343fb5398ecce2c2c0848e9987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:26 crc kubenswrapper[4755]: I0224 09:57:26.677936 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:57:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:26 crc kubenswrapper[4755]: I0224 09:57:26.696797 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:27 crc kubenswrapper[4755]: I0224 09:57:27.316267 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:27 crc kubenswrapper[4755]: I0224 09:57:27.316315 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:27 crc kubenswrapper[4755]: I0224 09:57:27.316378 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:27 crc kubenswrapper[4755]: I0224 09:57:27.316485 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:27 crc kubenswrapper[4755]: E0224 09:57:27.316476 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:27 crc kubenswrapper[4755]: E0224 09:57:27.316628 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:27 crc kubenswrapper[4755]: E0224 09:57:27.316855 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:27 crc kubenswrapper[4755]: E0224 09:57:27.316902 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.317022 4755 scope.go:117] "RemoveContainer" containerID="a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20" Feb 24 09:57:28 crc kubenswrapper[4755]: E0224 09:57:28.317291 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" podUID="787109ef-edb9-4334-afc7-6197f57f444f" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.369662 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.369743 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.369761 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.369791 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.369809 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:28Z","lastTransitionTime":"2026-02-24T09:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:57:28 crc kubenswrapper[4755]: E0224 09:57:28.391184 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:28Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.396174 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.396234 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.396255 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.396281 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.396299 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:28Z","lastTransitionTime":"2026-02-24T09:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:57:28 crc kubenswrapper[4755]: E0224 09:57:28.415347 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:28Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.420837 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.420901 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.420917 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.420946 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.420962 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:28Z","lastTransitionTime":"2026-02-24T09:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:57:28 crc kubenswrapper[4755]: E0224 09:57:28.441467 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:28Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.447108 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.447190 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.447210 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.447268 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.447293 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:28Z","lastTransitionTime":"2026-02-24T09:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:57:28 crc kubenswrapper[4755]: E0224 09:57:28.468181 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:28Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.473757 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.473821 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.473838 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.473870 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:28 crc kubenswrapper[4755]: I0224 09:57:28.473888 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:28Z","lastTransitionTime":"2026-02-24T09:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:57:28 crc kubenswrapper[4755]: E0224 09:57:28.498384 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:28Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:28 crc kubenswrapper[4755]: E0224 09:57:28.498644 4755 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 09:57:29 crc kubenswrapper[4755]: I0224 09:57:29.315871 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:29 crc kubenswrapper[4755]: I0224 09:57:29.315942 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:29 crc kubenswrapper[4755]: I0224 09:57:29.315981 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:29 crc kubenswrapper[4755]: I0224 09:57:29.316056 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:29 crc kubenswrapper[4755]: E0224 09:57:29.316061 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:29 crc kubenswrapper[4755]: E0224 09:57:29.316182 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:29 crc kubenswrapper[4755]: E0224 09:57:29.316319 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:29 crc kubenswrapper[4755]: E0224 09:57:29.316458 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:31 crc kubenswrapper[4755]: I0224 09:57:31.316344 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:31 crc kubenswrapper[4755]: I0224 09:57:31.316400 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:31 crc kubenswrapper[4755]: I0224 09:57:31.316373 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:31 crc kubenswrapper[4755]: I0224 09:57:31.316348 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:31 crc kubenswrapper[4755]: E0224 09:57:31.316531 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:31 crc kubenswrapper[4755]: E0224 09:57:31.316701 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:31 crc kubenswrapper[4755]: E0224 09:57:31.316757 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:31 crc kubenswrapper[4755]: E0224 09:57:31.316981 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:31 crc kubenswrapper[4755]: E0224 09:57:31.460669 4755 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:57:33 crc kubenswrapper[4755]: I0224 09:57:33.316267 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:33 crc kubenswrapper[4755]: I0224 09:57:33.316311 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:33 crc kubenswrapper[4755]: I0224 09:57:33.316378 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:33 crc kubenswrapper[4755]: E0224 09:57:33.316456 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:33 crc kubenswrapper[4755]: I0224 09:57:33.316511 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:33 crc kubenswrapper[4755]: E0224 09:57:33.316662 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:33 crc kubenswrapper[4755]: E0224 09:57:33.316821 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:33 crc kubenswrapper[4755]: E0224 09:57:33.317004 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.172009 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dwm6v_79ca0953-3a40-45a2-9305-02272f036006/kube-multus/0.log" Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.172567 4755 generic.go:334] "Generic (PLEG): container finished" podID="79ca0953-3a40-45a2-9305-02272f036006" containerID="66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61" exitCode=1 Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.172621 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dwm6v" event={"ID":"79ca0953-3a40-45a2-9305-02272f036006","Type":"ContainerDied","Data":"66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61"} Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.173268 4755 scope.go:117] "RemoveContainer" containerID="66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61" Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.194101 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56624072-fac5-436e-998d-4fa33ae69d81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd1e718cfab9164e39a5c55224adf4bbdd5f079f640714fee46be8a4fb0f976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286ba8a2e826f23c678d6ebfecc3f21235bc71057ad52d80e3790c2740263f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:55:22Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 09:54:58.419448 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:54:58.422385 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:54:58.459101 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:54:58.464522 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:55:22.511015 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:55:22.511136 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c6dc1f59bd27418f47c0102400b6607ae9f9baf7011ce5d8c06c5b87387d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3660f48bfc0f08822a77d63d8f65807eaf1cf8ef7da4b7db349cbc61e88f732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486ea75c3392dcb5867446a7b70131ed7f1dc0e0dc8ad436679d16b6eb4c3e8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.215816 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597
126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating 
requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.240581 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.256739 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.277116 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:57:34Z\\\",\\\"message\\\":\\\"2026-02-24T09:56:49+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_881c74aa-9cbf-4b4f-aac0-01d769478c7e\\\\n2026-02-24T09:56:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_881c74aa-9cbf-4b4f-aac0-01d769478c7e to /host/opt/cni/bin/\\\\n2026-02-24T09:56:49Z [verbose] multus-daemon started\\\\n2026-02-24T09:56:49Z [verbose] Readiness Indicator file check\\\\n2026-02-24T09:57:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.298931 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0831
3793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.315837 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.315867 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:35 crc kubenswrapper[4755]: E0224 09:57:35.316012 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.316121 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.316239 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:35 crc kubenswrapper[4755]: E0224 09:57:35.316282 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:35 crc kubenswrapper[4755]: E0224 09:57:35.316412 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:35 crc kubenswrapper[4755]: E0224 09:57:35.316528 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.331439 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.350171 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.365587 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.380584 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc842a89-ef27-4774-83e7-cd82d2be82d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e84672e8d77bad12c4a89084f19b7bab0dd4c0caaaa038c165095b2d509910b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7da773bd75b944d3f90cf4f68330fccbbfceb6441356b2561b48f2d7d90a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c839041bf4a2cd96fe63f169ec201656be8f3343fb5398ecce2c2c0848e9987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.397219 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:57:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.415230 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.431213 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.447232 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.468720 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.483826 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.500442 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T
09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:35 crc kubenswrapper[4755]: I0224 09:57:35.523849 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:57:14Z\\\",\\\"message\\\":\\\"60\\\\nI0224 09:57:14.343342 6876 factory.go:656] Stopping watch factory\\\\nI0224 
09:57:14.343363 6876 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 09:57:14.343364 6876 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 09:57:14.343402 6876 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 09:57:14.343456 6876 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:57:14.343123 6876 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-etcd/etcd for endpointslice openshift-etcd/etcd-4gsrx as it is not a known egress service\\\\nI0224 09:57:14.343744 6876 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0224 09:57:14.343810 6876 egressservice_zone_node.go:110] Processing sync for Egress Service node crc\\\\nI0224 09:57:14.343848 6876 egressservice_zone_node.go:113] Finished syncing Egress Service node crc: 42.411µs\\\\nI0224 09:57:14.343932 6876 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:57:14.344465 6876 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:57:14.344511 6876 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0224 09:57:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:57:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458
f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:35Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.178922 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dwm6v_79ca0953-3a40-45a2-9305-02272f036006/kube-multus/0.log" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.179010 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dwm6v" event={"ID":"79ca0953-3a40-45a2-9305-02272f036006","Type":"ContainerStarted","Data":"2b92711a65ad6cebe8ab7873f779cbee9056647a29d6fa95b4000b68ed70322b"} Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.203587 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.218967 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.236948 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.250125 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc842a89-ef27-4774-83e7-cd82d2be82d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e84672e8d77bad12c4a89084f19b7bab0dd4c0caaaa038c165095b2d509910b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7da773bd75b944d3f90cf4f68330fccbbfceb6441356b2561b48f2d7d90a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c839041bf4a2cd96fe63f169ec201656be8f3343fb5398ecce2c2c0848e9987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.265579 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.285784 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:57:14Z\\\",\\\"message\\\":\\\"60\\\\nI0224 09:57:14.343342 6876 factory.go:656] Stopping watch factory\\\\nI0224 09:57:14.343363 6876 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 09:57:14.343364 6876 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 09:57:14.343402 6876 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 09:57:14.343456 
6876 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:57:14.343123 6876 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-etcd/etcd for endpointslice openshift-etcd/etcd-4gsrx as it is not a known egress service\\\\nI0224 09:57:14.343744 6876 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0224 09:57:14.343810 6876 egressservice_zone_node.go:110] Processing sync for Egress Service node crc\\\\nI0224 09:57:14.343848 6876 egressservice_zone_node.go:113] Finished syncing Egress Service node crc: 42.411µs\\\\nI0224 09:57:14.343932 6876 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:57:14.344465 6876 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:57:14.344511 6876 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0224 09:57:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:57:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458
f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.304003 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.317312 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc 
kubenswrapper[4755]: I0224 09:57:36.328727 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.345484 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.365250 4755 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b92711a65ad6cebe8ab7873f779cbee9056647a29d6fa95b4000b68ed70322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:57:34Z\\\",\\\"message\\\":\\\"2026-02-24T09:56:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_881c74aa-9cbf-4b4f-aac0-01d769478c7e\\\\n2026-02-24T09:56:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_881c74aa-9cbf-4b4f-aac0-01d769478c7e 
to /host/opt/cni/bin/\\\\n2026-02-24T09:56:49Z [verbose] multus-daemon started\\\\n2026-02-24T09:56:49Z [verbose] Readiness Indicator file check\\\\n2026-02-24T09:57:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-c
erts\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.388494 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0831
3793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.407312 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56624072-fac5-436e-998d-4fa33ae69d81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd1e718cfab9164e39a5c55224adf4bbdd5f079f640714fee46be8a4fb0f976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286ba8a2e826f23c678d6ebfecc3f21235bc71057ad52d80e3790c2740263f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:55:22Z\\\",\\\"message\\\
":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 09:54:58.419448 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:54:58.422385 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:54:58.459101 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:54:58.464522 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:55:22.511015 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:55:22.511136 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c6dc1f59bd27418f47c0102400b6607ae9f9baf7011ce5d8c06c5b87387d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3660f48bfc0f08822a77d63d8f65807eaf1cf8ef7da4b7db349cbc61e88f732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486ea75c3392dcb5867446a7b70131ed7f1dc0e0dc8ad436679d16b6eb4c3e8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.429831 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea7137
38b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.453569 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: E0224 09:57:36.461656 4755 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.476907 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3
938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.497128 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.513367 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e
32be11b913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.540160 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.561969 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.579527 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.598553 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.613971 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.630194 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.647666 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc842a89-ef27-4774-83e7-cd82d2be82d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e84672e8d77bad12c4a89084f19b7bab0dd4c0caaaa038c165095b2d509910b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7da773bd75b944d3f90cf4f68330fccbbfceb6441356b2561b48f2d7d90a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c839041bf4a2cd96fe63f169ec201656be8f3343fb5398ecce2c2c0848e9987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.667173 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.698354 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:57:14Z\\\",\\\"message\\\":\\\"60\\\\nI0224 09:57:14.343342 6876 factory.go:656] Stopping watch factory\\\\nI0224 09:57:14.343363 6876 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 09:57:14.343364 6876 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 09:57:14.343402 6876 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 09:57:14.343456 
6876 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:57:14.343123 6876 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-etcd/etcd for endpointslice openshift-etcd/etcd-4gsrx as it is not a known egress service\\\\nI0224 09:57:14.343744 6876 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0224 09:57:14.343810 6876 egressservice_zone_node.go:110] Processing sync for Egress Service node crc\\\\nI0224 09:57:14.343848 6876 egressservice_zone_node.go:113] Finished syncing Egress Service node crc: 42.411µs\\\\nI0224 09:57:14.343932 6876 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:57:14.344465 6876 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:57:14.344511 6876 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0224 09:57:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:57:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458
f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.714584 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.728232 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc 
kubenswrapper[4755]: I0224 09:57:36.744915 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.771288 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.789299 4755 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b92711a65ad6cebe8ab7873f779cbee9056647a29d6fa95b4000b68ed70322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:57:34Z\\\",\\\"message\\\":\\\"2026-02-24T09:56:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_881c74aa-9cbf-4b4f-aac0-01d769478c7e\\\\n2026-02-24T09:56:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_881c74aa-9cbf-4b4f-aac0-01d769478c7e 
to /host/opt/cni/bin/\\\\n2026-02-24T09:56:49Z [verbose] multus-daemon started\\\\n2026-02-24T09:56:49Z [verbose] Readiness Indicator file check\\\\n2026-02-24T09:57:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-c
erts\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.808816 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0831
3793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.825459 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56624072-fac5-436e-998d-4fa33ae69d81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd1e718cfab9164e39a5c55224adf4bbdd5f079f640714fee46be8a4fb0f976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286ba8a2e826f23c678d6ebfecc3f21235bc71057ad52d80e3790c2740263f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:55:22Z\\\",\\\"message\\\
":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 09:54:58.419448 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:54:58.422385 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:54:58.459101 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:54:58.464522 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:55:22.511015 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:55:22.511136 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c6dc1f59bd27418f47c0102400b6607ae9f9baf7011ce5d8c06c5b87387d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3660f48bfc0f08822a77d63d8f65807eaf1cf8ef7da4b7db349cbc61e88f732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486ea75c3392dcb5867446a7b70131ed7f1dc0e0dc8ad436679d16b6eb4c3e8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.842762 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea7137
38b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:36 crc kubenswrapper[4755]: I0224 09:57:36.867633 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:36Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:37 crc kubenswrapper[4755]: I0224 09:57:37.315574 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:37 crc kubenswrapper[4755]: I0224 09:57:37.315642 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:37 crc kubenswrapper[4755]: E0224 09:57:37.315807 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:37 crc kubenswrapper[4755]: E0224 09:57:37.315926 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:37 crc kubenswrapper[4755]: I0224 09:57:37.316340 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:37 crc kubenswrapper[4755]: E0224 09:57:37.316470 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:37 crc kubenswrapper[4755]: I0224 09:57:37.316519 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:37 crc kubenswrapper[4755]: E0224 09:57:37.316686 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.894763 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.894841 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.894866 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.894900 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.894923 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:38Z","lastTransitionTime":"2026-02-24T09:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:57:38 crc kubenswrapper[4755]: E0224 09:57:38.916651 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.921924 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.921959 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.921970 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.921988 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.922000 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:38Z","lastTransitionTime":"2026-02-24T09:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:57:38 crc kubenswrapper[4755]: E0224 09:57:38.943311 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.948695 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.948732 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.948743 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.948759 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.948770 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:38Z","lastTransitionTime":"2026-02-24T09:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:57:38 crc kubenswrapper[4755]: E0224 09:57:38.964049 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.968566 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.968602 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.968614 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.968630 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.968642 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:38Z","lastTransitionTime":"2026-02-24T09:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:57:38 crc kubenswrapper[4755]: E0224 09:57:38.985370 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:38Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.990432 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.990485 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.990504 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.990531 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:38 crc kubenswrapper[4755]: I0224 09:57:38.990549 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:38Z","lastTransitionTime":"2026-02-24T09:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:57:39 crc kubenswrapper[4755]: E0224 09:57:39.011795 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:39Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:39 crc kubenswrapper[4755]: E0224 09:57:39.012133 4755 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 09:57:39 crc kubenswrapper[4755]: I0224 09:57:39.315905 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:39 crc kubenswrapper[4755]: I0224 09:57:39.316027 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:39 crc kubenswrapper[4755]: I0224 09:57:39.316114 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:39 crc kubenswrapper[4755]: E0224 09:57:39.316160 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:39 crc kubenswrapper[4755]: I0224 09:57:39.315926 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:39 crc kubenswrapper[4755]: E0224 09:57:39.316868 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:39 crc kubenswrapper[4755]: E0224 09:57:39.316990 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:39 crc kubenswrapper[4755]: E0224 09:57:39.317210 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:39 crc kubenswrapper[4755]: I0224 09:57:39.317427 4755 scope.go:117] "RemoveContainer" containerID="a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20" Feb 24 09:57:40 crc kubenswrapper[4755]: I0224 09:57:40.195924 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fljft_787109ef-edb9-4334-afc7-6197f57f444f/ovnkube-controller/2.log" Feb 24 09:57:40 crc kubenswrapper[4755]: I0224 09:57:40.199614 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerStarted","Data":"dd4a9a780d4c161dd7077ffe6890f5ed848af90cf0522093b8dbaecff93471e7"} Feb 24 09:57:40 crc kubenswrapper[4755]: I0224 09:57:40.200223 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:57:40 crc kubenswrapper[4755]: I0224 09:57:40.218147 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:40 crc kubenswrapper[4755]: I0224 09:57:40.300387 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:40 crc kubenswrapper[4755]: I0224 09:57:40.334259 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:40 crc kubenswrapper[4755]: I0224 09:57:40.355325 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:57:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:40 crc kubenswrapper[4755]: I0224 09:57:40.376181 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:40 crc kubenswrapper[4755]: I0224 09:57:40.398011 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:40 crc kubenswrapper[4755]: I0224 09:57:40.420003 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:40 crc kubenswrapper[4755]: I0224 09:57:40.440047 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc842a89-ef27-4774-83e7-cd82d2be82d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e84672e8d77bad12c4a89084f19b7bab0dd4c0caaaa038c165095b2d509910b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7da773bd75b944d3f90cf4f68330fccbbfceb6441356b2561b48f2d7d90a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c839041bf4a2cd96fe63f169ec201656be8f3343fb5398ecce2c2c0848e9987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:40 crc kubenswrapper[4755]: I0224 09:57:40.457903 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:40 crc kubenswrapper[4755]: I0224 09:57:40.493839 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd4a9a780d4c161dd7077ffe6890f5ed848af90cf0522093b8dbaecff93471e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:57:14Z\\\",\\\"message\\\":\\\"60\\\\nI0224 09:57:14.343342 6876 factory.go:656] Stopping watch factory\\\\nI0224 09:57:14.343363 6876 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 09:57:14.343364 6876 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 09:57:14.343402 6876 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 09:57:14.343456 
6876 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:57:14.343123 6876 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-etcd/etcd for endpointslice openshift-etcd/etcd-4gsrx as it is not a known egress service\\\\nI0224 09:57:14.343744 6876 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0224 09:57:14.343810 6876 egressservice_zone_node.go:110] Processing sync for Egress Service node crc\\\\nI0224 09:57:14.343848 6876 egressservice_zone_node.go:113] Finished syncing Egress Service node crc: 42.411µs\\\\nI0224 09:57:14.343932 6876 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:57:14.344465 6876 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:57:14.344511 6876 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0224 
09:57:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:57:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:40 crc kubenswrapper[4755]: I0224 09:57:40.517282 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:40 crc kubenswrapper[4755]: I0224 09:57:40.539303 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:40 crc 
kubenswrapper[4755]: I0224 09:57:40.568137 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:40 crc kubenswrapper[4755]: I0224 09:57:40.593605 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:40 crc kubenswrapper[4755]: I0224 09:57:40.613029 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b92711a65ad6cebe8ab7873f779cbee9056647a29d6fa95b4000b68ed70322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:57:34Z\\\",\\\"message\\\":\\\"2026-02-24T09:56:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_881c74aa-9cbf-4b4f-aac0-01d769478c7e\\\\n2026-02-24T09:56:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_881c74aa-9cbf-4b4f-aac0-01d769478c7e to /host/opt/cni/bin/\\\\n2026-02-24T09:56:49Z [verbose] multus-daemon started\\\\n2026-02-24T09:56:49Z [verbose] 
Readiness Indicator file check\\\\n2026-02-24T09:57:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:40 crc kubenswrapper[4755]: I0224 09:57:40.636416 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081
489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:40 crc kubenswrapper[4755]: I0224 09:57:40.653781 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56624072-fac5-436e-998d-4fa33ae69d81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd1e718cfab9164e39a5c55224adf4bbdd5f079f640714fee46be8a4fb0f976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286ba8a2e826f23c678d6ebfecc3f21235bc71057ad52d80e3790c2740263f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:55:22Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 09:54:58.419448 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:54:58.422385 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:54:58.459101 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:54:58.464522 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:55:22.511015 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:55:22.511136 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c6dc1f59bd27418f47c0102400b6607ae9f9baf7011ce5d8c06c5b87387d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3660f48bfc0f08822a77d63d8f65807eaf1cf8ef7da4b7db349cbc61e88f732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486ea75c3392dcb5867446a7b70131ed7f1dc0e0dc8ad436679d16b6eb4c3e8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:40 crc kubenswrapper[4755]: I0224 09:57:40.672184 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597
126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating 
requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:40Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.208129 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fljft_787109ef-edb9-4334-afc7-6197f57f444f/ovnkube-controller/3.log" Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.209943 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fljft_787109ef-edb9-4334-afc7-6197f57f444f/ovnkube-controller/2.log" Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.214199 4755 generic.go:334] "Generic (PLEG): container finished" podID="787109ef-edb9-4334-afc7-6197f57f444f" containerID="dd4a9a780d4c161dd7077ffe6890f5ed848af90cf0522093b8dbaecff93471e7" exitCode=1 Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.214366 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerDied","Data":"dd4a9a780d4c161dd7077ffe6890f5ed848af90cf0522093b8dbaecff93471e7"} Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.214758 4755 scope.go:117] "RemoveContainer" containerID="a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20" Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.215812 4755 scope.go:117] 
"RemoveContainer" containerID="dd4a9a780d4c161dd7077ffe6890f5ed848af90cf0522093b8dbaecff93471e7" Feb 24 09:57:41 crc kubenswrapper[4755]: E0224 09:57:41.216180 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" podUID="787109ef-edb9-4334-afc7-6197f57f444f" Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.245696 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc842a89-ef27-4774-83e7-cd82d2be82d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e84672e8d77bad12c4a89084f19b7bab0dd4c0caaaa038c165095b2d509910b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a
8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7da773bd75b944d3f90cf4f68330fccbbfceb6441356b2561b48f2d7d90a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c839041bf4a2cd96fe63f169ec201656be8f3343fb5398ecce2c2c0848e9987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.263971 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:57:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.281993 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.300994 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.316091 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:41 crc kubenswrapper[4755]: E0224 09:57:41.316487 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.316176 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.316826 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.316216 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.316117 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:41 crc kubenswrapper[4755]: E0224 09:57:41.317311 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:41 crc kubenswrapper[4755]: E0224 09:57:41.317505 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:41 crc kubenswrapper[4755]: E0224 09:57:41.317686 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.337503 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.355523 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:41 crc 
kubenswrapper[4755]: I0224 09:57:41.371284 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.396931 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd4a9a780d4c161dd7077ffe6890f5ed848af90cf0522093b8dbaecff93471e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a61f4caf21c9780a2e93a55da6b67bfcf0a047c5028f4536c01aca1f58262c20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:57:14Z\\\",\\\"message\\\":\\\"60\\\\nI0224 09:57:14.343342 6876 factory.go:656] Stopping watch factory\\\\nI0224 09:57:14.343363 6876 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 09:57:14.343364 6876 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 09:57:14.343402 6876 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 09:57:14.343456 
6876 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:57:14.343123 6876 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-etcd/etcd for endpointslice openshift-etcd/etcd-4gsrx as it is not a known egress service\\\\nI0224 09:57:14.343744 6876 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0224 09:57:14.343810 6876 egressservice_zone_node.go:110] Processing sync for Egress Service node crc\\\\nI0224 09:57:14.343848 6876 egressservice_zone_node.go:113] Finished syncing Egress Service node crc: 42.411µs\\\\nI0224 09:57:14.343932 6876 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 09:57:14.344465 6876 ovnkube.go:599] Stopped ovnkube\\\\nI0224 09:57:14.344511 6876 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0224 09:57:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:57:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd4a9a780d4c161dd7077ffe6890f5ed848af90cf0522093b8dbaecff93471e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:57:40Z\\\",\\\"message\\\":\\\"false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.250\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0224 09:57:40.408379 7159 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}\\\\nF0224 09:57:40.408587 7159 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.417171 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56624072-fac5-436e-998d-4fa33ae69d81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd1e718cfab9164e39a5c55224adf4bbdd5f079f640714fee46be8a4fb0f976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286ba8a2e826f23c678d6ebfecc3f21235bc71057ad52d80e3790c2740263f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:55:22Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 09:54:58.419448 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:54:58.422385 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:54:58.459101 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:54:58.464522 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:55:22.511015 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:55:22.511136 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c6dc1f59bd27418f47c0102400b6607ae9f9baf7011ce5d8c06c5b87387d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3660f48bfc0f08822a77d63d8f65807eaf1cf8ef7da4b7db349cbc61e88f732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486ea75c3392dcb5867446a7b70131ed7f1dc0e0dc8ad436679d16b6eb4c3e8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.439898 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597
126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating 
requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.461910 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:41 crc kubenswrapper[4755]: E0224 09:57:41.463360 4755 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.481738 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.502427 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b92711a65ad6cebe8ab7873f779cbee9056647a29d6fa95b4000b68ed70322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:57:34Z\\\",\\\"message\\\":\\\"2026-02-24T09:56:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_881c74aa-9cbf-4b4f-aac0-01d769478c7e\\\\n2026-02-24T09:56:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_881c74aa-9cbf-4b4f-aac0-01d769478c7e to /host/opt/cni/bin/\\\\n2026-02-24T09:56:49Z [verbose] multus-daemon started\\\\n2026-02-24T09:56:49Z [verbose] 
Readiness Indicator file check\\\\n2026-02-24T09:57:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.527701 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081
489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.570244 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.591737 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:41 crc kubenswrapper[4755]: I0224 09:57:41.611028 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:41Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:42 crc kubenswrapper[4755]: I0224 09:57:42.220423 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fljft_787109ef-edb9-4334-afc7-6197f57f444f/ovnkube-controller/3.log" Feb 24 09:57:42 crc kubenswrapper[4755]: I0224 09:57:42.226456 4755 scope.go:117] "RemoveContainer" containerID="dd4a9a780d4c161dd7077ffe6890f5ed848af90cf0522093b8dbaecff93471e7" Feb 24 09:57:42 crc kubenswrapper[4755]: 
E0224 09:57:42.226929 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" podUID="787109ef-edb9-4334-afc7-6197f57f444f" Feb 24 09:57:42 crc kubenswrapper[4755]: I0224 09:57:42.245930 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56624072-fac5-436e-998d-4fa33ae69d81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd1e718cfab9164e39a5c55224adf4bbdd5f079f640714fee46be8a4fb0f976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286ba8a2e826f23c678d6ebfe
cc3f21235bc71057ad52d80e3790c2740263f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:55:22Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 09:54:58.419448 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:54:58.422385 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:54:58.459101 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:54:58.464522 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:55:22.511015 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:55:22.511136 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c6dc1f59bd27418f47c0102400b6607ae9f9baf7011ce5d8c06c5b87387d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3660f48bfc0f08822a77d63d8f65807eaf1cf8ef7da4b7db349cbc61e88f732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486ea75c3392dcb5867446a7b70131ed7f1dc0e0dc8ad436679d16b6eb4c3e8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:42Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:42 crc kubenswrapper[4755]: I0224 09:57:42.267277 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea7137
38b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:42Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:42 crc kubenswrapper[4755]: I0224 09:57:42.293769 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:42Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:42 crc kubenswrapper[4755]: I0224 09:57:42.313113 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:42Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:42 crc kubenswrapper[4755]: I0224 09:57:42.331447 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b92711a65ad6cebe8ab7873f779cbee9056647a29d6fa95b4000b68ed70322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:57:34Z\\\",\\\"message\\\":\\\"2026-02-24T09:56:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_881c74aa-9cbf-4b4f-aac0-01d769478c7e\\\\n2026-02-24T09:56:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_881c74aa-9cbf-4b4f-aac0-01d769478c7e to /host/opt/cni/bin/\\\\n2026-02-24T09:56:49Z [verbose] multus-daemon started\\\\n2026-02-24T09:56:49Z [verbose] 
Readiness Indicator file check\\\\n2026-02-24T09:57:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:42Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:42 crc kubenswrapper[4755]: I0224 09:57:42.355430 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081
489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:42Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:42 crc kubenswrapper[4755]: I0224 09:57:42.390633 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:42Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:42 crc kubenswrapper[4755]: I0224 09:57:42.408604 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:42Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:42 crc kubenswrapper[4755]: I0224 09:57:42.426507 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:42Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:42 crc kubenswrapper[4755]: I0224 09:57:42.445134 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc842a89-ef27-4774-83e7-cd82d2be82d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e84672e8d77bad12c4a89084f19b7bab0dd4c0caaaa038c165095b2d509910b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7da773bd75b944d3f90cf4f68330fccbbfceb6441356b2561b48f2d7d90a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c839041bf4a2cd96fe63f169ec201656be8f3343fb5398ecce2c2c0848e9987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:42Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:42 crc kubenswrapper[4755]: I0224 09:57:42.463940 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:57:42Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:42 crc kubenswrapper[4755]: I0224 09:57:42.482572 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:42Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:42 crc kubenswrapper[4755]: I0224 09:57:42.500354 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:42Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:42 crc kubenswrapper[4755]: I0224 09:57:42.515306 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:42Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:42 crc kubenswrapper[4755]: I0224 09:57:42.531106 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:42Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:42 crc kubenswrapper[4755]: I0224 09:57:42.546241 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:42Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:42 crc kubenswrapper[4755]: I0224 09:57:42.561734 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T
09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:42Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:42 crc kubenswrapper[4755]: I0224 09:57:42.592897 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd4a9a780d4c161dd7077ffe6890f5ed848af90cf0522093b8dbaecff93471e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd4a9a780d4c161dd7077ffe6890f5ed848af90cf0522093b8dbaecff93471e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:57:40Z\\\",\\\"message\\\":\\\"false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, 
Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.250\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0224 09:57:40.408379 7159 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}\\\\nF0224 09:57:40.408587 7159 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:57:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458
f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:42Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:43 crc kubenswrapper[4755]: I0224 09:57:43.316099 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:43 crc kubenswrapper[4755]: I0224 09:57:43.316139 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:43 crc kubenswrapper[4755]: I0224 09:57:43.316113 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:43 crc kubenswrapper[4755]: I0224 09:57:43.316265 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:43 crc kubenswrapper[4755]: E0224 09:57:43.316387 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:43 crc kubenswrapper[4755]: E0224 09:57:43.316588 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:43 crc kubenswrapper[4755]: E0224 09:57:43.316697 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:43 crc kubenswrapper[4755]: E0224 09:57:43.316784 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:45 crc kubenswrapper[4755]: I0224 09:57:45.316330 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:45 crc kubenswrapper[4755]: I0224 09:57:45.316385 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:45 crc kubenswrapper[4755]: I0224 09:57:45.316430 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:45 crc kubenswrapper[4755]: E0224 09:57:45.316547 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:45 crc kubenswrapper[4755]: I0224 09:57:45.316574 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:45 crc kubenswrapper[4755]: E0224 09:57:45.316699 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:45 crc kubenswrapper[4755]: E0224 09:57:45.317510 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:45 crc kubenswrapper[4755]: E0224 09:57:45.317752 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:45 crc kubenswrapper[4755]: I0224 09:57:45.332379 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 24 09:57:46 crc kubenswrapper[4755]: I0224 09:57:46.336615 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:46 crc kubenswrapper[4755]: I0224 09:57:46.354614 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebfbb9b8-05cb-4a42-a112-8436eff86d69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e018bfbfc779c88c09f9a0d316c3ed04f815ee4b2b7ec6efe15eba1de075939e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8208ba5fe11546cc7dafb93073843511b97889983edac56135c17a41ae318f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8208ba5fe11546cc7dafb93073843511b97889983edac56135c17a41ae318f48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:46 crc 
kubenswrapper[4755]: I0224 09:57:46.374840 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc842a89-ef27-4774-83e7-cd82d2be82d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e84672e8d77bad12c4a89084f19b7bab0dd4c0caaaa038c165095b2d509910b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7da773bd75b944d3f90cf4f68330fccbbfceb6441356b2561b48f2d7d90a4b\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c839041bf4a2cd96fe63f169ec201656be8f3343fb5398ecce2c2c0848e9987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:46 crc kubenswrapper[4755]: I0224 09:57:46.397667 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:57:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:46 crc kubenswrapper[4755]: I0224 09:57:46.419541 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:46 crc kubenswrapper[4755]: I0224 09:57:46.433300 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:46 crc kubenswrapper[4755]: I0224 09:57:46.456439 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:46 crc kubenswrapper[4755]: E0224 09:57:46.465165 4755 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 09:57:46 crc kubenswrapper[4755]: I0224 09:57:46.473428 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:46 crc 
kubenswrapper[4755]: I0224 09:57:46.488306 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:46 crc kubenswrapper[4755]: I0224 09:57:46.521734 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd4a9a780d4c161dd7077ffe6890f5ed848af90cf0522093b8dbaecff93471e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd4a9a780d4c161dd7077ffe6890f5ed848af90cf0522093b8dbaecff93471e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:57:40Z\\\",\\\"message\\\":\\\"false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster\\\\\\\", 
UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.250\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0224 09:57:40.408379 7159 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}\\\\nF0224 09:57:40.408587 7159 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:57:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458
f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:46 crc kubenswrapper[4755]: I0224 09:57:46.542179 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0831
3793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:46 crc kubenswrapper[4755]: I0224 09:57:46.553369 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56624072-fac5-436e-998d-4fa33ae69d81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd1e718cfab9164e39a5c55224adf4bbdd5f079f640714fee46be8a4fb0f976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286ba8a2e826f23c678d6ebfecc3f21235bc71057ad52d80e3790c2740263f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:55:22Z\\\",\\\"message\\\
":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 09:54:58.419448 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:54:58.422385 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:54:58.459101 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:54:58.464522 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:55:22.511015 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:55:22.511136 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c6dc1f59bd27418f47c0102400b6607ae9f9baf7011ce5d8c06c5b87387d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3660f48bfc0f08822a77d63d8f65807eaf1cf8ef7da4b7db349cbc61e88f732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486ea75c3392dcb5867446a7b70131ed7f1dc0e0dc8ad436679d16b6eb4c3e8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:46 crc kubenswrapper[4755]: I0224 09:57:46.568183 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea7137
38b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:46 crc kubenswrapper[4755]: I0224 09:57:46.582773 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:46 crc kubenswrapper[4755]: I0224 09:57:46.601312 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:46 crc kubenswrapper[4755]: I0224 09:57:46.621024 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b92711a65ad6cebe8ab7873f779cbee9056647a29d6fa95b4000b68ed70322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:57:34Z\\\",\\\"message\\\":\\\"2026-02-24T09:56:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_881c74aa-9cbf-4b4f-aac0-01d769478c7e\\\\n2026-02-24T09:56:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_881c74aa-9cbf-4b4f-aac0-01d769478c7e to /host/opt/cni/bin/\\\\n2026-02-24T09:56:49Z [verbose] multus-daemon started\\\\n2026-02-24T09:56:49Z [verbose] 
Readiness Indicator file check\\\\n2026-02-24T09:57:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:46 crc kubenswrapper[4755]: I0224 09:57:46.653714 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a
0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":
\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:46 crc kubenswrapper[4755]: I0224 09:57:46.669422 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:46 crc kubenswrapper[4755]: I0224 09:57:46.685274 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:46Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:47 crc kubenswrapper[4755]: I0224 09:57:47.316374 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:47 crc kubenswrapper[4755]: I0224 09:57:47.316393 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:47 crc kubenswrapper[4755]: I0224 09:57:47.316463 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:47 crc kubenswrapper[4755]: I0224 09:57:47.316559 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:47 crc kubenswrapper[4755]: E0224 09:57:47.316727 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:47 crc kubenswrapper[4755]: E0224 09:57:47.316878 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:47 crc kubenswrapper[4755]: E0224 09:57:47.317031 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:47 crc kubenswrapper[4755]: E0224 09:57:47.317243 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.287781 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.287854 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.287878 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.287910 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.287934 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:49Z","lastTransitionTime":"2026-02-24T09:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.315588 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.315632 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:49 crc kubenswrapper[4755]: E0224 09:57:49.315828 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.315901 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.316025 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:49 crc kubenswrapper[4755]: E0224 09:57:49.316260 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:49 crc kubenswrapper[4755]: E0224 09:57:49.316452 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:49 crc kubenswrapper[4755]: E0224 09:57:49.316567 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:49 crc kubenswrapper[4755]: E0224 09:57:49.318990 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:49Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.325658 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.325702 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.325721 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.325748 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.325767 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:49Z","lastTransitionTime":"2026-02-24T09:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:57:49 crc kubenswrapper[4755]: E0224 09:57:49.345956 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:49Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.353121 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.353179 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.353192 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.353214 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.353227 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:49Z","lastTransitionTime":"2026-02-24T09:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:57:49 crc kubenswrapper[4755]: E0224 09:57:49.374161 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:49Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.380196 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.380258 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.380277 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.380311 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.380332 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:49Z","lastTransitionTime":"2026-02-24T09:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:57:49 crc kubenswrapper[4755]: E0224 09:57:49.402436 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:49Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.407723 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.407768 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.407786 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.407810 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:49 crc kubenswrapper[4755]: I0224 09:57:49.407827 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:49Z","lastTransitionTime":"2026-02-24T09:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:57:49 crc kubenswrapper[4755]: E0224 09:57:49.424255 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"716e957f-e154-4a81-a173-c5b7419cfbf1\\\",\\\"systemUUID\\\":\\\"19ae84c6-820e-4b63-8116-0dc0088d14e8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:49Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:49 crc kubenswrapper[4755]: E0224 09:57:49.424488 4755 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 09:57:51 crc kubenswrapper[4755]: I0224 09:57:51.315817 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:51 crc kubenswrapper[4755]: I0224 09:57:51.316105 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:51 crc kubenswrapper[4755]: I0224 09:57:51.315887 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:51 crc kubenswrapper[4755]: I0224 09:57:51.315855 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:51 crc kubenswrapper[4755]: E0224 09:57:51.316278 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:51 crc kubenswrapper[4755]: E0224 09:57:51.316430 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:51 crc kubenswrapper[4755]: E0224 09:57:51.317049 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:51 crc kubenswrapper[4755]: E0224 09:57:51.317207 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:51 crc kubenswrapper[4755]: E0224 09:57:51.466590 4755 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:57:53 crc kubenswrapper[4755]: I0224 09:57:53.315769 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:53 crc kubenswrapper[4755]: E0224 09:57:53.317022 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:53 crc kubenswrapper[4755]: I0224 09:57:53.315879 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:53 crc kubenswrapper[4755]: I0224 09:57:53.315990 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:53 crc kubenswrapper[4755]: E0224 09:57:53.317729 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:53 crc kubenswrapper[4755]: I0224 09:57:53.317337 4755 scope.go:117] "RemoveContainer" containerID="dd4a9a780d4c161dd7077ffe6890f5ed848af90cf0522093b8dbaecff93471e7" Feb 24 09:57:53 crc kubenswrapper[4755]: E0224 09:57:53.317580 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:53 crc kubenswrapper[4755]: I0224 09:57:53.315866 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:53 crc kubenswrapper[4755]: E0224 09:57:53.318000 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:53 crc kubenswrapper[4755]: E0224 09:57:53.318519 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" podUID="787109ef-edb9-4334-afc7-6197f57f444f" Feb 24 09:57:55 crc kubenswrapper[4755]: I0224 09:57:55.316059 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:55 crc kubenswrapper[4755]: I0224 09:57:55.316169 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:55 crc kubenswrapper[4755]: I0224 09:57:55.316178 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:55 crc kubenswrapper[4755]: I0224 09:57:55.316178 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:55 crc kubenswrapper[4755]: E0224 09:57:55.316322 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:55 crc kubenswrapper[4755]: E0224 09:57:55.316431 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:55 crc kubenswrapper[4755]: E0224 09:57:55.316518 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:55 crc kubenswrapper[4755]: E0224 09:57:55.316580 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:56 crc kubenswrapper[4755]: I0224 09:57:56.333880 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebfbb9b8-05cb-4a42-a112-8436eff86d69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e018bfbfc779c88c09f9a0d316c3ed04f815ee4b2b7ec6efe15eba1de075939e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8208ba5fe11546cc7dafb93073843511b97889983edac56135c17a41ae318f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8208ba5fe11546cc7dafb93073843511b97889983edac56135c17a41ae318f48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:56 crc kubenswrapper[4755]: I0224 09:57:56.353614 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc842a89-ef27-4774-83e7-cd82d2be82d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e84672e8d77bad12c4a89084f19b7bab0dd4c0caaaa038c165095b2d509910b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7da773bd75b944d3f90cf4f68330fccbbfceb6441356b2561b48f2d7d90a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c839041bf4a2cd96fe63f169ec201656be8f3343fb5398ecce2c2c0848e9987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://759fd1b6df255b7624a2244ad204e14c6560519586c01b68743fe14ae93af9c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:56 crc kubenswrapper[4755]: I0224 09:57:56.373666 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b544a157420dc72e6b3eaeab779a748e957175d9898d5d6816004a0dcb69b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T09:57:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:56 crc kubenswrapper[4755]: I0224 09:57:56.392901 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:56 crc kubenswrapper[4755]: I0224 09:57:56.407760 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6407399-185a-4b27-bd1d-d3816e43a0b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a4665d1665fdbf99f9a666c3de3aa9835243bd523f68100bbae5b83849267e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2161eb648464d24c62a458b7a658d549183c8fed
8a6904d37a1bffc7930d992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwtmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:56 crc kubenswrapper[4755]: I0224 09:57:56.423882 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ca669cb-3429-4187-bee6-232dbd316c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec8c93fabf613db99cae64e45342c0263ce4706f4a6088525ae93d808e73117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bvck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:56 crc kubenswrapper[4755]: I0224 09:57:56.450849 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:56 crc kubenswrapper[4755]: E0224 09:57:56.467998 4755 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 09:57:56 crc kubenswrapper[4755]: I0224 09:57:56.471403 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98t22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82775556-3991-45ab-ac50-7ef81cafeaee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m82v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98t22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:56 crc 
kubenswrapper[4755]: I0224 09:57:56.491728 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2cmfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04c132ba-c396-4f64-a02b-fcdae681ed74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369d192fa45ed08846e45407d109655164fa2eaef004397272f71dcbc6cc1168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzxz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2cmfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:56 crc kubenswrapper[4755]: I0224 09:57:56.525388 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"787109ef-edb9-4334-afc7-6197f57f444f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd4a9a780d4c161dd7077ffe6890f5ed848af90cf0522093b8dbaecff93471e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd4a9a780d4c161dd7077ffe6890f5ed848af90cf0522093b8dbaecff93471e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:57:40Z\\\",\\\"message\\\":\\\"false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster\\\\\\\", 
UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.250\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0224 09:57:40.408379 7159 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}\\\\nF0224 09:57:40.408587 7159 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:57:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fdf739d94f788458
f95e022da6f6f918ab373fe981ddc3f295478fbc238584\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fljft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:56 crc kubenswrapper[4755]: I0224 09:57:56.546989 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56624072-fac5-436e-998d-4fa33ae69d81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd1e718cfab9164e39a5c55224adf4bbdd5f079f640714fee46be8a4fb0f976e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286ba8a2e826f23c678d6ebfecc3f21235bc71057ad52d80e3790c2740263f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:55:22Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 09:54:58.419448 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 09:54:58.422385 1 observer_polling.go:159] Starting file observer\\\\nI0224 09:54:58.459101 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 09:54:58.464522 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 09:55:22.511015 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 09:55:22.511136 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c6dc1f59bd27418f47c0102400b6607ae9f9baf7011ce5d8c06c5b87387d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3660f48bfc0f08822a77d63d8f65807eaf1cf8ef7da4b7db349cbc61e88f732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486ea75c3392dcb5867446a7b70131ed7f1dc0e0dc8ad436679d16b6eb4c3e8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:56 crc kubenswrapper[4755]: I0224 09:57:56.573669 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1574b657-3607-40b8-9c2e-1a056ab20b00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597
126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T09:56:10Z\\\",\\\"message\\\":\\\"le observer\\\\nW0224 09:56:09.846418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 09:56:09.846577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 09:56:09.847829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825505641/tls.crt::/tmp/serving-cert-1825505641/tls.key\\\\\\\"\\\\nI0224 09:56:10.304466 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 09:56:10.308822 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 09:56:10.308949 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 09:56:10.309048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 09:56:10.309173 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating 
requests\\\\\\\" limit=200\\\\nI0224 09:56:10.318027 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 09:56:10.318104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 09:56:10.318119 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318232 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 09:56:10.318245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 09:56:10.318254 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 09:56:10.318263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 09:56:10.318271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 09:56:10.319660 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:56 crc kubenswrapper[4755]: I0224 09:57:56.596842 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7931012967914697d49e7abae1389ab33fc4f7dabebf6456e24f0c516fbabba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:56 crc kubenswrapper[4755]: I0224 09:57:56.617191 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b10145813336bd21582f742c6e3df6d2396393de7fc7e78fbc0e70a6c6d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84013d80984ebc9c9d4c73a0617456141722f137ac0f1580b0e53c775cfd32fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:56 crc kubenswrapper[4755]: I0224 09:57:56.642331 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dwm6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ca0953-3a40-45a2-9305-02272f036006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b92711a65ad6cebe8ab7873f779cbee9056647a29d6fa95b4000b68ed70322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T09:57:34Z\\\",\\\"message\\\":\\\"2026-02-24T09:56:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_881c74aa-9cbf-4b4f-aac0-01d769478c7e\\\\n2026-02-24T09:56:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_881c74aa-9cbf-4b4f-aac0-01d769478c7e to /host/opt/cni/bin/\\\\n2026-02-24T09:56:49Z [verbose] multus-daemon started\\\\n2026-02-24T09:56:49Z [verbose] 
Readiness Indicator file check\\\\n2026-02-24T09:57:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w29tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dwm6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:56 crc kubenswrapper[4755]: I0224 09:57:56.668722 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8t77m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f842f5c8-ff09-48b2-9805-ad9de28e2ea7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081
489b8f483d866b5c6621e19acff61190648a2815bf2af14dd8ca05248be90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fa1f2cd68a7fa9323c92b561b7602a1c80de43bbcf38fe6cd83f17334771a8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2a4e03c62efd1e0d272a12886a996e1e4579b9b82417a7e2212fe6d9e8b51d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c73dc63b3b0d014e37e5d70c9edba5c8f44ae796dd5d11b67d29953e1d17bcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e08313793b61c2b950bd2e6209b06b5dfa03df7d93738c6a315a05fd268b5f2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4f6177b3aa515b4dfdcc153209b44632f7706ec581e5a45a319d0d43586133\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0a0999b9baaa58b118403e4b2fbce32502623b93aadd8658312b074e616d24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpwjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8t77m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:56 crc kubenswrapper[4755]: I0224 09:57:56.707519 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7fddb2-086d-4b68-929f-f8726b9705a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24fcc1c08dcb7c05db9c47d67347ffd1dffa20d815749bdc508b831d24553d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://138a9fc335292eb99f119dc794cf775cc0fa94601b2847d67b9eee29f1e55f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab11542bad69aeeeb2c5e9d1af735897f4a9f80fc0584be082cff6c8b8b8d701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c51cf31addfdca3550ac527f767d1a9fb5dd09e2f5cb4f5b6d40630771ee15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf679788f20ef75ae7aabbd9887a524dde0c2c66cb4411c2e656a19f408eea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c850ec8da4212286e3938b8b784b007d79955787bbf1b71d883e4eb15ff16c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T09:54:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ba1bf116d40c6c790d66895a2b98e2216eb8113908a376c7d913e4a6a0d56b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d034fa73c2a03c2486516296598882789bec8e3acdb674ef222da7126ed87c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T09:54:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:54:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:56 crc kubenswrapper[4755]: I0224 09:57:56.729477 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:56 crc kubenswrapper[4755]: I0224 09:57:56.748405 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dec1056d-97aa-4dfc-a63d-d729dfdb88f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T09:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0072cc284e4837caacd9e1582f9212ebbb8e0bb50ef76ff1829681fd62a31081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a047f115755f45594fd14de8c9e60e32be11b
913771d184d9ebae037b994e189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h28np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T09:56:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-85cjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:57:56Z is after 2025-08-24T17:21:41Z" Feb 24 09:57:57 crc kubenswrapper[4755]: I0224 09:57:57.315381 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:57 crc kubenswrapper[4755]: I0224 09:57:57.315865 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:57 crc kubenswrapper[4755]: I0224 09:57:57.316224 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:57 crc kubenswrapper[4755]: I0224 09:57:57.316297 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:57 crc kubenswrapper[4755]: E0224 09:57:57.316613 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:57 crc kubenswrapper[4755]: E0224 09:57:57.316456 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:57 crc kubenswrapper[4755]: E0224 09:57:57.316944 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:57 crc kubenswrapper[4755]: E0224 09:57:57.317125 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.316533 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.316621 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.316655 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.316900 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:57:59 crc kubenswrapper[4755]: E0224 09:57:59.317037 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:57:59 crc kubenswrapper[4755]: E0224 09:57:59.317371 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:57:59 crc kubenswrapper[4755]: E0224 09:57:59.317473 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:57:59 crc kubenswrapper[4755]: E0224 09:57:59.317565 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.438437 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.438532 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.438557 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.438590 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.438612 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:57:59Z","lastTransitionTime":"2026-02-24T09:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.522403 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xj65"] Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.522812 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xj65" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.526255 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.527515 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.528135 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.534485 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.555751 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=62.55572569 podStartE2EDuration="1m2.55572569s" podCreationTimestamp="2026-02-24 09:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:57:59.554105706 +0000 UTC m=+184.010628289" watchObservedRunningTime="2026-02-24 09:57:59.55572569 +0000 UTC m=+184.012248263" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.596040 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0773bdc1-4034-4cf4-991b-3438f727ff54-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5xj65\" (UID: \"0773bdc1-4034-4cf4-991b-3438f727ff54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xj65" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.596667 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0773bdc1-4034-4cf4-991b-3438f727ff54-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5xj65\" (UID: \"0773bdc1-4034-4cf4-991b-3438f727ff54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xj65" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.596845 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0773bdc1-4034-4cf4-991b-3438f727ff54-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5xj65\" (UID: \"0773bdc1-4034-4cf4-991b-3438f727ff54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xj65" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.597000 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0773bdc1-4034-4cf4-991b-3438f727ff54-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5xj65\" (UID: \"0773bdc1-4034-4cf4-991b-3438f727ff54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xj65" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.597232 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0773bdc1-4034-4cf4-991b-3438f727ff54-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5xj65\" (UID: \"0773bdc1-4034-4cf4-991b-3438f727ff54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xj65" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.602183 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=90.602154713 podStartE2EDuration="1m30.602154713s" podCreationTimestamp="2026-02-24 09:56:29 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:57:59.582600393 +0000 UTC m=+184.039122946" watchObservedRunningTime="2026-02-24 09:57:59.602154713 +0000 UTC m=+184.058677296" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.648205 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-dwm6v" podStartSLOduration=132.648181882 podStartE2EDuration="2m12.648181882s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:57:59.647708987 +0000 UTC m=+184.104231540" watchObservedRunningTime="2026-02-24 09:57:59.648181882 +0000 UTC m=+184.104704455" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.673251 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-8t77m" podStartSLOduration=132.673231155 podStartE2EDuration="2m12.673231155s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:57:59.672634285 +0000 UTC m=+184.129156868" watchObservedRunningTime="2026-02-24 09:57:59.673231155 +0000 UTC m=+184.129753738" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.698775 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0773bdc1-4034-4cf4-991b-3438f727ff54-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5xj65\" (UID: \"0773bdc1-4034-4cf4-991b-3438f727ff54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xj65" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.698840 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/0773bdc1-4034-4cf4-991b-3438f727ff54-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5xj65\" (UID: \"0773bdc1-4034-4cf4-991b-3438f727ff54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xj65" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.698871 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0773bdc1-4034-4cf4-991b-3438f727ff54-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5xj65\" (UID: \"0773bdc1-4034-4cf4-991b-3438f727ff54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xj65" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.698922 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0773bdc1-4034-4cf4-991b-3438f727ff54-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5xj65\" (UID: \"0773bdc1-4034-4cf4-991b-3438f727ff54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xj65" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.698951 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0773bdc1-4034-4cf4-991b-3438f727ff54-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5xj65\" (UID: \"0773bdc1-4034-4cf4-991b-3438f727ff54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xj65" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.699097 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0773bdc1-4034-4cf4-991b-3438f727ff54-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5xj65\" (UID: \"0773bdc1-4034-4cf4-991b-3438f727ff54\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xj65" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.700308 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0773bdc1-4034-4cf4-991b-3438f727ff54-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5xj65\" (UID: \"0773bdc1-4034-4cf4-991b-3438f727ff54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xj65" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.701269 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0773bdc1-4034-4cf4-991b-3438f727ff54-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5xj65\" (UID: \"0773bdc1-4034-4cf4-991b-3438f727ff54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xj65" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.719252 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0773bdc1-4034-4cf4-991b-3438f727ff54-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5xj65\" (UID: \"0773bdc1-4034-4cf4-991b-3438f727ff54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xj65" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.744166 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=80.744143681 podStartE2EDuration="1m20.744143681s" podCreationTimestamp="2026-02-24 09:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:57:59.738912778 +0000 UTC m=+184.195435321" watchObservedRunningTime="2026-02-24 09:57:59.744143681 +0000 UTC m=+184.200666214" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.745234 4755 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0773bdc1-4034-4cf4-991b-3438f727ff54-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5xj65\" (UID: \"0773bdc1-4034-4cf4-991b-3438f727ff54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xj65" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.787472 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-85cjn" podStartSLOduration=131.787454001 podStartE2EDuration="2m11.787454001s" podCreationTimestamp="2026-02-24 09:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:57:59.787000586 +0000 UTC m=+184.243523139" watchObservedRunningTime="2026-02-24 09:57:59.787454001 +0000 UTC m=+184.243976554" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.828437 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=14.828422312 podStartE2EDuration="14.828422312s" podCreationTimestamp="2026-02-24 09:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:57:59.813731824 +0000 UTC m=+184.270254377" watchObservedRunningTime="2026-02-24 09:57:59.828422312 +0000 UTC m=+184.284944855" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.846820 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=46.846806704 podStartE2EDuration="46.846806704s" podCreationTimestamp="2026-02-24 09:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:57:59.828208015 +0000 UTC m=+184.284730578" 
watchObservedRunningTime="2026-02-24 09:57:59.846806704 +0000 UTC m=+184.303329247" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.847386 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xj65" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.882150 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podStartSLOduration=132.882117237 podStartE2EDuration="2m12.882117237s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:57:59.881745505 +0000 UTC m=+184.338268048" watchObservedRunningTime="2026-02-24 09:57:59.882117237 +0000 UTC m=+184.338639780" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.896776 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-bxllg" podStartSLOduration=132.896757964 podStartE2EDuration="2m12.896757964s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:57:59.895266054 +0000 UTC m=+184.351788597" watchObservedRunningTime="2026-02-24 09:57:59.896757964 +0000 UTC m=+184.353280507" Feb 24 09:57:59 crc kubenswrapper[4755]: I0224 09:57:59.942408 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-2cmfc" podStartSLOduration=132.94238683 podStartE2EDuration="2m12.94238683s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:57:59.941959216 +0000 UTC m=+184.398481779" watchObservedRunningTime="2026-02-24 
09:57:59.94238683 +0000 UTC m=+184.398909373" Feb 24 09:58:00 crc kubenswrapper[4755]: I0224 09:58:00.294483 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xj65" event={"ID":"0773bdc1-4034-4cf4-991b-3438f727ff54","Type":"ContainerStarted","Data":"eb7ebf80df2857b5b48507819a23b07f620158b720f5c47459f2810dc8efba6d"} Feb 24 09:58:00 crc kubenswrapper[4755]: I0224 09:58:00.294553 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xj65" event={"ID":"0773bdc1-4034-4cf4-991b-3438f727ff54","Type":"ContainerStarted","Data":"01c8c3a5edf129f457559160b4f8a6175dc84da9023fa2148fe02ee48551e2d4"} Feb 24 09:58:00 crc kubenswrapper[4755]: I0224 09:58:00.342743 4755 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 24 09:58:00 crc kubenswrapper[4755]: I0224 09:58:00.353934 4755 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 24 09:58:01 crc kubenswrapper[4755]: I0224 09:58:01.316178 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:58:01 crc kubenswrapper[4755]: I0224 09:58:01.316242 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:58:01 crc kubenswrapper[4755]: I0224 09:58:01.316288 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:58:01 crc kubenswrapper[4755]: I0224 09:58:01.316260 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:58:01 crc kubenswrapper[4755]: E0224 09:58:01.316390 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:58:01 crc kubenswrapper[4755]: E0224 09:58:01.316524 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:58:01 crc kubenswrapper[4755]: E0224 09:58:01.316596 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:58:01 crc kubenswrapper[4755]: E0224 09:58:01.316666 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:58:01 crc kubenswrapper[4755]: E0224 09:58:01.470191 4755 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:58:03 crc kubenswrapper[4755]: I0224 09:58:03.315608 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:58:03 crc kubenswrapper[4755]: I0224 09:58:03.315707 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:58:03 crc kubenswrapper[4755]: I0224 09:58:03.315838 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:58:03 crc kubenswrapper[4755]: E0224 09:58:03.315828 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:58:03 crc kubenswrapper[4755]: I0224 09:58:03.315740 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:58:03 crc kubenswrapper[4755]: E0224 09:58:03.316029 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:58:03 crc kubenswrapper[4755]: E0224 09:58:03.316183 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:58:03 crc kubenswrapper[4755]: E0224 09:58:03.316249 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:58:05 crc kubenswrapper[4755]: I0224 09:58:05.315556 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:58:05 crc kubenswrapper[4755]: I0224 09:58:05.315613 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:58:05 crc kubenswrapper[4755]: I0224 09:58:05.315635 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:58:05 crc kubenswrapper[4755]: E0224 09:58:05.315690 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:58:05 crc kubenswrapper[4755]: I0224 09:58:05.315709 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:58:05 crc kubenswrapper[4755]: E0224 09:58:05.315833 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:58:05 crc kubenswrapper[4755]: E0224 09:58:05.315905 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:58:05 crc kubenswrapper[4755]: E0224 09:58:05.315975 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:58:06 crc kubenswrapper[4755]: I0224 09:58:06.317980 4755 scope.go:117] "RemoveContainer" containerID="dd4a9a780d4c161dd7077ffe6890f5ed848af90cf0522093b8dbaecff93471e7" Feb 24 09:58:06 crc kubenswrapper[4755]: E0224 09:58:06.318300 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" podUID="787109ef-edb9-4334-afc7-6197f57f444f" Feb 24 09:58:06 crc kubenswrapper[4755]: E0224 09:58:06.471035 4755 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:58:07 crc kubenswrapper[4755]: I0224 09:58:07.315717 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:58:07 crc kubenswrapper[4755]: I0224 09:58:07.315802 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:58:07 crc kubenswrapper[4755]: I0224 09:58:07.315893 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:58:07 crc kubenswrapper[4755]: E0224 09:58:07.315898 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:58:07 crc kubenswrapper[4755]: I0224 09:58:07.315963 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:58:07 crc kubenswrapper[4755]: E0224 09:58:07.316010 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:58:07 crc kubenswrapper[4755]: E0224 09:58:07.316253 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:58:07 crc kubenswrapper[4755]: E0224 09:58:07.316579 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:58:09 crc kubenswrapper[4755]: I0224 09:58:09.315606 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:58:09 crc kubenswrapper[4755]: I0224 09:58:09.315682 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:58:09 crc kubenswrapper[4755]: I0224 09:58:09.315735 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:58:09 crc kubenswrapper[4755]: E0224 09:58:09.315774 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:58:09 crc kubenswrapper[4755]: E0224 09:58:09.315882 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:58:09 crc kubenswrapper[4755]: I0224 09:58:09.315981 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:58:09 crc kubenswrapper[4755]: E0224 09:58:09.316314 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:58:09 crc kubenswrapper[4755]: E0224 09:58:09.317197 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:58:11 crc kubenswrapper[4755]: I0224 09:58:11.316142 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:58:11 crc kubenswrapper[4755]: I0224 09:58:11.316182 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:58:11 crc kubenswrapper[4755]: I0224 09:58:11.316557 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:58:11 crc kubenswrapper[4755]: I0224 09:58:11.316625 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:58:11 crc kubenswrapper[4755]: E0224 09:58:11.316771 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:58:11 crc kubenswrapper[4755]: E0224 09:58:11.316941 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:58:11 crc kubenswrapper[4755]: E0224 09:58:11.317044 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:58:11 crc kubenswrapper[4755]: E0224 09:58:11.317207 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:58:11 crc kubenswrapper[4755]: E0224 09:58:11.473256 4755 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:58:13 crc kubenswrapper[4755]: I0224 09:58:13.315690 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:58:13 crc kubenswrapper[4755]: I0224 09:58:13.315743 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:58:13 crc kubenswrapper[4755]: I0224 09:58:13.315723 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:58:13 crc kubenswrapper[4755]: I0224 09:58:13.315692 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:58:13 crc kubenswrapper[4755]: E0224 09:58:13.315887 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:58:13 crc kubenswrapper[4755]: E0224 09:58:13.315992 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:58:13 crc kubenswrapper[4755]: E0224 09:58:13.316160 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:58:13 crc kubenswrapper[4755]: E0224 09:58:13.316289 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:58:15 crc kubenswrapper[4755]: I0224 09:58:15.315524 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:58:15 crc kubenswrapper[4755]: I0224 09:58:15.315637 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:58:15 crc kubenswrapper[4755]: E0224 09:58:15.315751 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:58:15 crc kubenswrapper[4755]: I0224 09:58:15.315771 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:58:15 crc kubenswrapper[4755]: I0224 09:58:15.315834 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:58:15 crc kubenswrapper[4755]: E0224 09:58:15.315961 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:58:15 crc kubenswrapper[4755]: E0224 09:58:15.316095 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:58:15 crc kubenswrapper[4755]: E0224 09:58:15.316184 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:58:16 crc kubenswrapper[4755]: E0224 09:58:16.474345 4755 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:58:17 crc kubenswrapper[4755]: I0224 09:58:17.316178 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:58:17 crc kubenswrapper[4755]: I0224 09:58:17.316203 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:58:17 crc kubenswrapper[4755]: I0224 09:58:17.316331 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:58:17 crc kubenswrapper[4755]: I0224 09:58:17.316369 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:58:17 crc kubenswrapper[4755]: E0224 09:58:17.316937 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:58:17 crc kubenswrapper[4755]: E0224 09:58:17.317088 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:58:17 crc kubenswrapper[4755]: I0224 09:58:17.317496 4755 scope.go:117] "RemoveContainer" containerID="dd4a9a780d4c161dd7077ffe6890f5ed848af90cf0522093b8dbaecff93471e7" Feb 24 09:58:17 crc kubenswrapper[4755]: E0224 09:58:17.317337 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:58:17 crc kubenswrapper[4755]: E0224 09:58:17.317225 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:58:17 crc kubenswrapper[4755]: E0224 09:58:17.317800 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fljft_openshift-ovn-kubernetes(787109ef-edb9-4334-afc7-6197f57f444f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" podUID="787109ef-edb9-4334-afc7-6197f57f444f" Feb 24 09:58:19 crc kubenswrapper[4755]: I0224 09:58:19.315775 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:58:19 crc kubenswrapper[4755]: I0224 09:58:19.315922 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:58:19 crc kubenswrapper[4755]: I0224 09:58:19.315952 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:58:19 crc kubenswrapper[4755]: I0224 09:58:19.315912 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:58:19 crc kubenswrapper[4755]: E0224 09:58:19.316105 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:58:19 crc kubenswrapper[4755]: E0224 09:58:19.316407 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:58:19 crc kubenswrapper[4755]: E0224 09:58:19.316545 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:58:19 crc kubenswrapper[4755]: E0224 09:58:19.316661 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:58:21 crc kubenswrapper[4755]: I0224 09:58:21.315561 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:58:21 crc kubenswrapper[4755]: I0224 09:58:21.315605 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:58:21 crc kubenswrapper[4755]: I0224 09:58:21.315676 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:58:21 crc kubenswrapper[4755]: I0224 09:58:21.315588 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:58:21 crc kubenswrapper[4755]: E0224 09:58:21.316279 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:58:21 crc kubenswrapper[4755]: E0224 09:58:21.316712 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:58:21 crc kubenswrapper[4755]: E0224 09:58:21.316947 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:58:21 crc kubenswrapper[4755]: E0224 09:58:21.317041 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:58:21 crc kubenswrapper[4755]: I0224 09:58:21.368658 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dwm6v_79ca0953-3a40-45a2-9305-02272f036006/kube-multus/1.log" Feb 24 09:58:21 crc kubenswrapper[4755]: I0224 09:58:21.369340 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dwm6v_79ca0953-3a40-45a2-9305-02272f036006/kube-multus/0.log" Feb 24 09:58:21 crc kubenswrapper[4755]: I0224 09:58:21.369402 4755 generic.go:334] "Generic (PLEG): container finished" podID="79ca0953-3a40-45a2-9305-02272f036006" containerID="2b92711a65ad6cebe8ab7873f779cbee9056647a29d6fa95b4000b68ed70322b" exitCode=1 Feb 24 09:58:21 crc kubenswrapper[4755]: I0224 09:58:21.369439 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dwm6v" 
event={"ID":"79ca0953-3a40-45a2-9305-02272f036006","Type":"ContainerDied","Data":"2b92711a65ad6cebe8ab7873f779cbee9056647a29d6fa95b4000b68ed70322b"} Feb 24 09:58:21 crc kubenswrapper[4755]: I0224 09:58:21.369485 4755 scope.go:117] "RemoveContainer" containerID="66b3e89b5ca08155448cb882240e65ad4f1ef5e60fcba339bdc39b8c411bab61" Feb 24 09:58:21 crc kubenswrapper[4755]: I0224 09:58:21.370020 4755 scope.go:117] "RemoveContainer" containerID="2b92711a65ad6cebe8ab7873f779cbee9056647a29d6fa95b4000b68ed70322b" Feb 24 09:58:21 crc kubenswrapper[4755]: E0224 09:58:21.370505 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-dwm6v_openshift-multus(79ca0953-3a40-45a2-9305-02272f036006)\"" pod="openshift-multus/multus-dwm6v" podUID="79ca0953-3a40-45a2-9305-02272f036006" Feb 24 09:58:21 crc kubenswrapper[4755]: I0224 09:58:21.397725 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xj65" podStartSLOduration=154.39769655 podStartE2EDuration="2m34.39769655s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:00.318733257 +0000 UTC m=+184.775255840" watchObservedRunningTime="2026-02-24 09:58:21.39769655 +0000 UTC m=+205.854219123" Feb 24 09:58:21 crc kubenswrapper[4755]: E0224 09:58:21.476125 4755 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 09:58:22 crc kubenswrapper[4755]: I0224 09:58:22.374377 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dwm6v_79ca0953-3a40-45a2-9305-02272f036006/kube-multus/1.log" Feb 24 09:58:23 crc kubenswrapper[4755]: I0224 09:58:23.315518 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:58:23 crc kubenswrapper[4755]: I0224 09:58:23.315583 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:58:23 crc kubenswrapper[4755]: I0224 09:58:23.315553 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:58:23 crc kubenswrapper[4755]: I0224 09:58:23.315518 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:58:23 crc kubenswrapper[4755]: E0224 09:58:23.315738 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:58:23 crc kubenswrapper[4755]: E0224 09:58:23.315817 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:58:23 crc kubenswrapper[4755]: E0224 09:58:23.316032 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:58:23 crc kubenswrapper[4755]: E0224 09:58:23.316150 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:58:25 crc kubenswrapper[4755]: I0224 09:58:25.315320 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:58:25 crc kubenswrapper[4755]: I0224 09:58:25.315440 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:58:25 crc kubenswrapper[4755]: E0224 09:58:25.315480 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:58:25 crc kubenswrapper[4755]: I0224 09:58:25.315321 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:58:25 crc kubenswrapper[4755]: I0224 09:58:25.315346 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:58:25 crc kubenswrapper[4755]: E0224 09:58:25.315662 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:58:25 crc kubenswrapper[4755]: E0224 09:58:25.315725 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:58:25 crc kubenswrapper[4755]: E0224 09:58:25.315806 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:58:26 crc kubenswrapper[4755]: E0224 09:58:26.477244 4755 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 09:58:27 crc kubenswrapper[4755]: I0224 09:58:27.315651 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:58:27 crc kubenswrapper[4755]: I0224 09:58:27.315728 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:58:27 crc kubenswrapper[4755]: I0224 09:58:27.315870 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:58:27 crc kubenswrapper[4755]: E0224 09:58:27.316030 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:58:27 crc kubenswrapper[4755]: E0224 09:58:27.316419 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:58:27 crc kubenswrapper[4755]: E0224 09:58:27.316698 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:58:27 crc kubenswrapper[4755]: I0224 09:58:27.317019 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:58:27 crc kubenswrapper[4755]: E0224 09:58:27.317200 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:58:29 crc kubenswrapper[4755]: I0224 09:58:29.316300 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:58:29 crc kubenswrapper[4755]: I0224 09:58:29.316429 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:58:29 crc kubenswrapper[4755]: I0224 09:58:29.316325 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:58:29 crc kubenswrapper[4755]: I0224 09:58:29.316325 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:58:29 crc kubenswrapper[4755]: E0224 09:58:29.316554 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:58:29 crc kubenswrapper[4755]: E0224 09:58:29.316695 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:58:29 crc kubenswrapper[4755]: E0224 09:58:29.316881 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:58:29 crc kubenswrapper[4755]: E0224 09:58:29.317102 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:58:29 crc kubenswrapper[4755]: I0224 09:58:29.363380 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:29 crc kubenswrapper[4755]: I0224 09:58:29.363558 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:58:29 crc kubenswrapper[4755]: I0224 09:58:29.363648 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:58:29 crc kubenswrapper[4755]: E0224 09:58:29.363720 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 10:00:31.363671476 +0000 UTC m=+335.820194019 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:29 crc kubenswrapper[4755]: E0224 09:58:29.363796 4755 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:58:29 crc kubenswrapper[4755]: E0224 09:58:29.363883 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:58:29 crc kubenswrapper[4755]: I0224 09:58:29.363815 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:58:29 crc kubenswrapper[4755]: E0224 09:58:29.363938 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 10:00:31.363904324 +0000 UTC m=+335.820426897 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:58:29 crc kubenswrapper[4755]: E0224 09:58:29.363947 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:58:29 crc kubenswrapper[4755]: E0224 09:58:29.363968 4755 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:58:29 crc kubenswrapper[4755]: E0224 09:58:29.363968 4755 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:58:29 crc kubenswrapper[4755]: E0224 09:58:29.364029 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 10:00:31.364020708 +0000 UTC m=+335.820543251 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:58:29 crc kubenswrapper[4755]: E0224 09:58:29.364137 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 10:00:31.36410835 +0000 UTC m=+335.820630933 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:58:29 crc kubenswrapper[4755]: I0224 09:58:29.465696 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs\") pod \"network-metrics-daemon-98t22\" (UID: \"82775556-3991-45ab-ac50-7ef81cafeaee\") " pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:58:29 crc kubenswrapper[4755]: I0224 09:58:29.465801 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:58:29 
crc kubenswrapper[4755]: E0224 09:58:29.465992 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:58:29 crc kubenswrapper[4755]: E0224 09:58:29.465992 4755 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:58:29 crc kubenswrapper[4755]: E0224 09:58:29.466174 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs podName:82775556-3991-45ab-ac50-7ef81cafeaee nodeName:}" failed. No retries permitted until 2026-02-24 10:00:31.466139974 +0000 UTC m=+335.922662557 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs") pod "network-metrics-daemon-98t22" (UID: "82775556-3991-45ab-ac50-7ef81cafeaee") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:58:29 crc kubenswrapper[4755]: E0224 09:58:29.466019 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:58:29 crc kubenswrapper[4755]: E0224 09:58:29.466226 4755 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:58:29 crc kubenswrapper[4755]: E0224 09:58:29.466305 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-24 10:00:31.466282188 +0000 UTC m=+335.922804761 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:58:31 crc kubenswrapper[4755]: I0224 09:58:31.316251 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:58:31 crc kubenswrapper[4755]: I0224 09:58:31.316508 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:58:31 crc kubenswrapper[4755]: I0224 09:58:31.316660 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:58:31 crc kubenswrapper[4755]: E0224 09:58:31.316668 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:58:31 crc kubenswrapper[4755]: I0224 09:58:31.316727 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:58:31 crc kubenswrapper[4755]: E0224 09:58:31.316935 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:58:31 crc kubenswrapper[4755]: E0224 09:58:31.317238 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:58:31 crc kubenswrapper[4755]: E0224 09:58:31.317357 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:58:31 crc kubenswrapper[4755]: E0224 09:58:31.478582 4755 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 09:58:32 crc kubenswrapper[4755]: I0224 09:58:32.316647 4755 scope.go:117] "RemoveContainer" containerID="dd4a9a780d4c161dd7077ffe6890f5ed848af90cf0522093b8dbaecff93471e7" Feb 24 09:58:33 crc kubenswrapper[4755]: I0224 09:58:33.273225 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-98t22"] Feb 24 09:58:33 crc kubenswrapper[4755]: I0224 09:58:33.273348 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:58:33 crc kubenswrapper[4755]: E0224 09:58:33.273477 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:58:33 crc kubenswrapper[4755]: I0224 09:58:33.316156 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:58:33 crc kubenswrapper[4755]: I0224 09:58:33.316199 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:58:33 crc kubenswrapper[4755]: I0224 09:58:33.316184 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:58:33 crc kubenswrapper[4755]: E0224 09:58:33.316502 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:58:33 crc kubenswrapper[4755]: E0224 09:58:33.316582 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:58:33 crc kubenswrapper[4755]: E0224 09:58:33.316783 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:58:33 crc kubenswrapper[4755]: I0224 09:58:33.316945 4755 scope.go:117] "RemoveContainer" containerID="2b92711a65ad6cebe8ab7873f779cbee9056647a29d6fa95b4000b68ed70322b" Feb 24 09:58:33 crc kubenswrapper[4755]: I0224 09:58:33.417937 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fljft_787109ef-edb9-4334-afc7-6197f57f444f/ovnkube-controller/3.log" Feb 24 09:58:33 crc kubenswrapper[4755]: I0224 09:58:33.422701 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerStarted","Data":"10683c06882955ac04e9efcd515108d58cda8c44a2546a4be26bf589dd99f81c"} Feb 24 09:58:33 crc kubenswrapper[4755]: I0224 09:58:33.423183 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 
09:58:33 crc kubenswrapper[4755]: I0224 09:58:33.427660 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dwm6v_79ca0953-3a40-45a2-9305-02272f036006/kube-multus/1.log" Feb 24 09:58:33 crc kubenswrapper[4755]: I0224 09:58:33.427722 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dwm6v" event={"ID":"79ca0953-3a40-45a2-9305-02272f036006","Type":"ContainerStarted","Data":"1c64c7bea57cafc2c6ca12a0770020f27ecb2ee1b557ba321c9d9cc8c526b2b7"} Feb 24 09:58:33 crc kubenswrapper[4755]: I0224 09:58:33.463964 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" podStartSLOduration=166.463943898 podStartE2EDuration="2m46.463943898s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:33.462685798 +0000 UTC m=+217.919208371" watchObservedRunningTime="2026-02-24 09:58:33.463943898 +0000 UTC m=+217.920466451" Feb 24 09:58:35 crc kubenswrapper[4755]: I0224 09:58:35.316539 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:58:35 crc kubenswrapper[4755]: I0224 09:58:35.316539 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:58:35 crc kubenswrapper[4755]: I0224 09:58:35.316707 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:58:35 crc kubenswrapper[4755]: E0224 09:58:35.316923 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:58:35 crc kubenswrapper[4755]: E0224 09:58:35.317145 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:58:35 crc kubenswrapper[4755]: E0224 09:58:35.317314 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:58:35 crc kubenswrapper[4755]: I0224 09:58:35.317559 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:58:35 crc kubenswrapper[4755]: E0224 09:58:35.317698 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98t22" podUID="82775556-3991-45ab-ac50-7ef81cafeaee" Feb 24 09:58:37 crc kubenswrapper[4755]: I0224 09:58:37.315237 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:58:37 crc kubenswrapper[4755]: I0224 09:58:37.315288 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:58:37 crc kubenswrapper[4755]: I0224 09:58:37.315342 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 09:58:37 crc kubenswrapper[4755]: I0224 09:58:37.315377 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:58:37 crc kubenswrapper[4755]: I0224 09:58:37.318195 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 24 09:58:37 crc kubenswrapper[4755]: I0224 09:58:37.318908 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 24 09:58:37 crc kubenswrapper[4755]: I0224 09:58:37.319116 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 24 09:58:37 crc kubenswrapper[4755]: I0224 09:58:37.319258 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 24 09:58:37 crc kubenswrapper[4755]: I0224 09:58:37.319516 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 24 09:58:37 crc kubenswrapper[4755]: I0224 09:58:37.322051 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.425510 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.478656 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d9fml"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.479345 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.481986 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.482666 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.483116 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-49z4j"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.483744 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-49z4j" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.488404 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.489214 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.489604 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.489954 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t522d"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.490609 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.490703 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t522d" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.492188 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.494033 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-w7mwr"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.494749 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.494783 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.495094 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-w7mwr" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.495551 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.495871 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.496125 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.496389 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.496479 4755 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"serving-cert" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.496923 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.497091 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.499171 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.500695 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.501351 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.501691 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.502044 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.513141 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xpvsw"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.514589 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.520572 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wrwg8"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.523570 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.523935 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-wrwg8" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.525754 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.526233 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.526428 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.526678 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.526821 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.527058 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.527267 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.527434 4755 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.527470 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.527631 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.528011 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.528561 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.537464 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.543469 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.554820 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.555996 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-8mtqp"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.556738 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cq9sm"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.557442 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 24 
09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.559510 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.560324 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.560553 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.561315 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.561865 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.562129 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.562162 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.562261 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.562366 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.562543 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.562786 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.563886 4755 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-8n9ts"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.564462 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-8n9ts" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.564842 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mtqp" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.565168 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cq9sm" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.571140 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.571173 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.571441 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.571593 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.571933 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.574332 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.574514 4755 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-srwxs"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.574710 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.575268 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.580692 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.584974 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.591086 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-vhlbl"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.591781 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-smsp7"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.592275 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-fqscc"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.592783 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fqscc" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.593302 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-vhlbl" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.593601 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-smsp7" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.594733 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7tncz"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.595329 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.595352 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/40eaba2b-f4b1-449d-84d2-b72c1c2b67b0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fqqc4\" (UID: \"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.595390 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22c817e6-69d9-401d-86a9-3f52ac3bd891-serving-cert\") pod \"openshift-config-operator-7777fb866f-cq9sm\" (UID: \"22c817e6-69d9-401d-86a9-3f52ac3bd891\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cq9sm" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.595417 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpz6w\" (UniqueName: \"kubernetes.io/projected/c4ab2300-1f56-4c83-a156-78efcd90937a-kube-api-access-lpz6w\") pod \"machine-api-operator-5694c8668f-w7mwr\" (UID: \"c4ab2300-1f56-4c83-a156-78efcd90937a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-w7mwr" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.595440 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-etcd-client\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.595463 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7aedb363-fae6-47ca-8a07-5cd884f8ad8c-auth-proxy-config\") pod \"machine-approver-56656f9798-8mtqp\" (UID: \"7aedb363-fae6-47ca-8a07-5cd884f8ad8c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mtqp" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.595483 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c4ab2300-1f56-4c83-a156-78efcd90937a-images\") pod \"machine-api-operator-5694c8668f-w7mwr\" (UID: \"c4ab2300-1f56-4c83-a156-78efcd90937a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-w7mwr" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.595504 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-etcd-serving-ca\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.595524 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28cbd592-dd09-4070-b22e-f67f1d14dde2-client-ca\") pod \"route-controller-manager-6576b87f9c-2g2cw\" (UID: \"28cbd592-dd09-4070-b22e-f67f1d14dde2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 
09:58:40.595558 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7aedb363-fae6-47ca-8a07-5cd884f8ad8c-machine-approver-tls\") pod \"machine-approver-56656f9798-8mtqp\" (UID: \"7aedb363-fae6-47ca-8a07-5cd884f8ad8c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mtqp" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.595569 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v2srk"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.595580 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/22c817e6-69d9-401d-86a9-3f52ac3bd891-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cq9sm\" (UID: \"22c817e6-69d9-401d-86a9-3f52ac3bd891\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cq9sm" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.595604 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gnrl\" (UniqueName: \"kubernetes.io/projected/28cbd592-dd09-4070-b22e-f67f1d14dde2-kube-api-access-2gnrl\") pod \"route-controller-manager-6576b87f9c-2g2cw\" (UID: \"28cbd592-dd09-4070-b22e-f67f1d14dde2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.595628 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-node-pullsecrets\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc 
kubenswrapper[4755]: I0224 09:58:40.595659 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/914080f2-11d8-46d8-8ec3-fe3bebb059b6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-49z4j\" (UID: \"914080f2-11d8-46d8-8ec3-fe3bebb059b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-49z4j" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.595681 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mqxt\" (UniqueName: \"kubernetes.io/projected/22c817e6-69d9-401d-86a9-3f52ac3bd891-kube-api-access-7mqxt\") pod \"openshift-config-operator-7777fb866f-cq9sm\" (UID: \"22c817e6-69d9-401d-86a9-3f52ac3bd891\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cq9sm" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.595702 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c4ab2300-1f56-4c83-a156-78efcd90937a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-w7mwr\" (UID: \"c4ab2300-1f56-4c83-a156-78efcd90937a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-w7mwr" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.595727 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfcss\" (UniqueName: \"kubernetes.io/projected/2d61c075-3ea1-4130-bcab-a207ea44a31a-kube-api-access-kfcss\") pod \"console-operator-58897d9998-wrwg8\" (UID: \"2d61c075-3ea1-4130-bcab-a207ea44a31a\") " pod="openshift-console-operator/console-operator-58897d9998-wrwg8" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.595750 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/40eaba2b-f4b1-449d-84d2-b72c1c2b67b0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fqqc4\" (UID: \"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.595770 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-config\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.595802 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40eaba2b-f4b1-449d-84d2-b72c1c2b67b0-audit-dir\") pod \"apiserver-7bbb656c7d-fqqc4\" (UID: \"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.595826 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94cm8\" (UniqueName: \"kubernetes.io/projected/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-kube-api-access-94cm8\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.595845 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v2srk" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.595847 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40eaba2b-f4b1-449d-84d2-b72c1c2b67b0-audit-policies\") pod \"apiserver-7bbb656c7d-fqqc4\" (UID: \"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.596307 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw57c\" (UniqueName: \"kubernetes.io/projected/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-kube-api-access-xw57c\") pod \"controller-manager-879f6c89f-d9fml\" (UID: \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.596345 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6twsr\" (UniqueName: \"kubernetes.io/projected/7aedb363-fae6-47ca-8a07-5cd884f8ad8c-kube-api-access-6twsr\") pod \"machine-approver-56656f9798-8mtqp\" (UID: \"7aedb363-fae6-47ca-8a07-5cd884f8ad8c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mtqp" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.596368 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74cdp\" (UniqueName: \"kubernetes.io/projected/ae717194-7e0e-4cfb-b662-88c914e8c670-kube-api-access-74cdp\") pod \"cluster-samples-operator-665b6dd947-t522d\" (UID: \"ae717194-7e0e-4cfb-b662-88c914e8c670\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t522d" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.596386 4755 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/40eaba2b-f4b1-449d-84d2-b72c1c2b67b0-etcd-client\") pod \"apiserver-7bbb656c7d-fqqc4\" (UID: \"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.596402 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-image-import-ca\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.596423 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.596442 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-audit-dir\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.596462 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28cbd592-dd09-4070-b22e-f67f1d14dde2-serving-cert\") pod \"route-controller-manager-6576b87f9c-2g2cw\" (UID: \"28cbd592-dd09-4070-b22e-f67f1d14dde2\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.596467 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-jb8zb"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.596993 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-jb8zb" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.596482 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d61c075-3ea1-4130-bcab-a207ea44a31a-serving-cert\") pod \"console-operator-58897d9998-wrwg8\" (UID: \"2d61c075-3ea1-4130-bcab-a207ea44a31a\") " pod="openshift-console-operator/console-operator-58897d9998-wrwg8" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.597286 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/40eaba2b-f4b1-449d-84d2-b72c1c2b67b0-encryption-config\") pod \"apiserver-7bbb656c7d-fqqc4\" (UID: \"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.597317 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-d9fml\" (UID: \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.597360 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7aedb363-fae6-47ca-8a07-5cd884f8ad8c-config\") pod \"machine-approver-56656f9798-8mtqp\" (UID: \"7aedb363-fae6-47ca-8a07-5cd884f8ad8c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mtqp" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.597387 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-encryption-config\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.597519 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-client-ca\") pod \"controller-manager-879f6c89f-d9fml\" (UID: \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.597658 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae717194-7e0e-4cfb-b662-88c914e8c670-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-t522d\" (UID: \"ae717194-7e0e-4cfb-b662-88c914e8c670\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t522d" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.597691 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d61c075-3ea1-4130-bcab-a207ea44a31a-config\") pod \"console-operator-58897d9998-wrwg8\" (UID: \"2d61c075-3ea1-4130-bcab-a207ea44a31a\") " pod="openshift-console-operator/console-operator-58897d9998-wrwg8" Feb 24 09:58:40 crc 
kubenswrapper[4755]: I0224 09:58:40.597712 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-serving-cert\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.597749 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkrrm\" (UniqueName: \"kubernetes.io/projected/40eaba2b-f4b1-449d-84d2-b72c1c2b67b0-kube-api-access-mkrrm\") pod \"apiserver-7bbb656c7d-fqqc4\" (UID: \"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.597793 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/914080f2-11d8-46d8-8ec3-fe3bebb059b6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-49z4j\" (UID: \"914080f2-11d8-46d8-8ec3-fe3bebb059b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-49z4j" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.597877 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-audit\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.597905 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28cbd592-dd09-4070-b22e-f67f1d14dde2-config\") pod \"route-controller-manager-6576b87f9c-2g2cw\" (UID: 
\"28cbd592-dd09-4070-b22e-f67f1d14dde2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.597980 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc48z\" (UniqueName: \"kubernetes.io/projected/914080f2-11d8-46d8-8ec3-fe3bebb059b6-kube-api-access-rc48z\") pod \"openshift-apiserver-operator-796bbdcf4f-49z4j\" (UID: \"914080f2-11d8-46d8-8ec3-fe3bebb059b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-49z4j" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.598050 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-serving-cert\") pod \"controller-manager-879f6c89f-d9fml\" (UID: \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.598099 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4ab2300-1f56-4c83-a156-78efcd90937a-config\") pod \"machine-api-operator-5694c8668f-w7mwr\" (UID: \"c4ab2300-1f56-4c83-a156-78efcd90937a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-w7mwr" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.598148 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-config\") pod \"controller-manager-879f6c89f-d9fml\" (UID: \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.598734 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d61c075-3ea1-4130-bcab-a207ea44a31a-trusted-ca\") pod \"console-operator-58897d9998-wrwg8\" (UID: \"2d61c075-3ea1-4130-bcab-a207ea44a31a\") " pod="openshift-console-operator/console-operator-58897d9998-wrwg8" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.600162 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40eaba2b-f4b1-449d-84d2-b72c1c2b67b0-serving-cert\") pod \"apiserver-7bbb656c7d-fqqc4\" (UID: \"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.606429 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.606688 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.606781 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.606912 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.606977 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.607091 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.607125 4755 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"config-operator-serving-cert" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.607213 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.607261 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.607362 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.607484 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.607734 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.607989 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.608247 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.608405 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.608510 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.608614 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 
09:58:40.608644 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.608618 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.608722 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.608974 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.609407 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.609588 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.609866 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.619945 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.620254 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.620302 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.620323 4755 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.621132 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.623052 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kksxb"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.631203 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.632698 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m78zl"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.634559 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-p82hk"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.634773 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kksxb" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.634834 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m78zl" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.636119 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p82hk" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.636611 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.637238 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.637437 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.638292 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.638462 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.638760 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.639103 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.638847 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.639152 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.639365 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 
09:58:40.640186 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.649903 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.650288 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.653228 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.653308 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-l7frg"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.654152 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8bwfr"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.654768 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8bwfr" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.654782 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-l7frg" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.655003 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.655367 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.655585 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.656552 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wfrxf"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.658024 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mhrgs"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.659200 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mhrgs" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.659278 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.660223 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wfrxf" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.660266 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kw78"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.660814 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kw78" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.661021 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dprd"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.661425 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dprd" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.662139 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.663810 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bbm9l"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.664434 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.664951 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d9fml"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.665058 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbm9l" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.665344 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cltn2"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.665739 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cltn2" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.666132 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.666179 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.666715 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4n9rx"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.667036 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4n9rx" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.667762 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tft86"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.668869 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tft86" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.669055 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-w47s8"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.669423 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-w47s8" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.670243 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7qxdc"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.670604 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7qxdc" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.672843 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t4p5v"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.673412 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-t4p5v" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.675265 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-49z4j"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.677311 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532105-sr449"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.678004 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-sr449" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.678209 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-62kx9"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.680331 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-62kx9" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.682216 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.689803 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tp8zp"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.691430 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.695302 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stj9j"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.697676 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stj9j" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.698695 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wrwg8"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.702586 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.703479 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aedb363-fae6-47ca-8a07-5cd884f8ad8c-config\") pod \"machine-approver-56656f9798-8mtqp\" (UID: \"7aedb363-fae6-47ca-8a07-5cd884f8ad8c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mtqp" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.703560 4755 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-encryption-config\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.703604 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-client-ca\") pod \"controller-manager-879f6c89f-d9fml\" (UID: \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.703624 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae717194-7e0e-4cfb-b662-88c914e8c670-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-t522d\" (UID: \"ae717194-7e0e-4cfb-b662-88c914e8c670\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t522d" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.703641 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d61c075-3ea1-4130-bcab-a207ea44a31a-config\") pod \"console-operator-58897d9998-wrwg8\" (UID: \"2d61c075-3ea1-4130-bcab-a207ea44a31a\") " pod="openshift-console-operator/console-operator-58897d9998-wrwg8" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.703657 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-serving-cert\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc 
kubenswrapper[4755]: I0224 09:58:40.703675 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkrrm\" (UniqueName: \"kubernetes.io/projected/40eaba2b-f4b1-449d-84d2-b72c1c2b67b0-kube-api-access-mkrrm\") pod \"apiserver-7bbb656c7d-fqqc4\" (UID: \"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.703693 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/914080f2-11d8-46d8-8ec3-fe3bebb059b6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-49z4j\" (UID: \"914080f2-11d8-46d8-8ec3-fe3bebb059b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-49z4j" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.703712 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-audit\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.703730 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28cbd592-dd09-4070-b22e-f67f1d14dde2-config\") pod \"route-controller-manager-6576b87f9c-2g2cw\" (UID: \"28cbd592-dd09-4070-b22e-f67f1d14dde2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.703748 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc48z\" (UniqueName: \"kubernetes.io/projected/914080f2-11d8-46d8-8ec3-fe3bebb059b6-kube-api-access-rc48z\") pod \"openshift-apiserver-operator-796bbdcf4f-49z4j\" (UID: 
\"914080f2-11d8-46d8-8ec3-fe3bebb059b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-49z4j" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.703772 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-serving-cert\") pod \"controller-manager-879f6c89f-d9fml\" (UID: \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.704211 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4ab2300-1f56-4c83-a156-78efcd90937a-config\") pod \"machine-api-operator-5694c8668f-w7mwr\" (UID: \"c4ab2300-1f56-4c83-a156-78efcd90937a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-w7mwr" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.704240 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-config\") pod \"controller-manager-879f6c89f-d9fml\" (UID: \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.704273 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d61c075-3ea1-4130-bcab-a207ea44a31a-trusted-ca\") pod \"console-operator-58897d9998-wrwg8\" (UID: \"2d61c075-3ea1-4130-bcab-a207ea44a31a\") " pod="openshift-console-operator/console-operator-58897d9998-wrwg8" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.704293 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/40eaba2b-f4b1-449d-84d2-b72c1c2b67b0-serving-cert\") pod \"apiserver-7bbb656c7d-fqqc4\" (UID: \"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.704314 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/40eaba2b-f4b1-449d-84d2-b72c1c2b67b0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fqqc4\" (UID: \"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724124 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22c817e6-69d9-401d-86a9-3f52ac3bd891-serving-cert\") pod \"openshift-config-operator-7777fb866f-cq9sm\" (UID: \"22c817e6-69d9-401d-86a9-3f52ac3bd891\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cq9sm" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724167 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpz6w\" (UniqueName: \"kubernetes.io/projected/c4ab2300-1f56-4c83-a156-78efcd90937a-kube-api-access-lpz6w\") pod \"machine-api-operator-5694c8668f-w7mwr\" (UID: \"c4ab2300-1f56-4c83-a156-78efcd90937a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-w7mwr" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724186 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-etcd-client\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724202 4755 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-etcd-serving-ca\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724220 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7aedb363-fae6-47ca-8a07-5cd884f8ad8c-auth-proxy-config\") pod \"machine-approver-56656f9798-8mtqp\" (UID: \"7aedb363-fae6-47ca-8a07-5cd884f8ad8c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mtqp" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724243 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c4ab2300-1f56-4c83-a156-78efcd90937a-images\") pod \"machine-api-operator-5694c8668f-w7mwr\" (UID: \"c4ab2300-1f56-4c83-a156-78efcd90937a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-w7mwr" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724259 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/22c817e6-69d9-401d-86a9-3f52ac3bd891-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cq9sm\" (UID: \"22c817e6-69d9-401d-86a9-3f52ac3bd891\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cq9sm" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724277 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28cbd592-dd09-4070-b22e-f67f1d14dde2-client-ca\") pod \"route-controller-manager-6576b87f9c-2g2cw\" (UID: \"28cbd592-dd09-4070-b22e-f67f1d14dde2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw" Feb 24 09:58:40 
crc kubenswrapper[4755]: I0224 09:58:40.724305 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7aedb363-fae6-47ca-8a07-5cd884f8ad8c-machine-approver-tls\") pod \"machine-approver-56656f9798-8mtqp\" (UID: \"7aedb363-fae6-47ca-8a07-5cd884f8ad8c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mtqp" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724321 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gnrl\" (UniqueName: \"kubernetes.io/projected/28cbd592-dd09-4070-b22e-f67f1d14dde2-kube-api-access-2gnrl\") pod \"route-controller-manager-6576b87f9c-2g2cw\" (UID: \"28cbd592-dd09-4070-b22e-f67f1d14dde2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724340 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-node-pullsecrets\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724365 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/914080f2-11d8-46d8-8ec3-fe3bebb059b6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-49z4j\" (UID: \"914080f2-11d8-46d8-8ec3-fe3bebb059b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-49z4j" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724379 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mqxt\" (UniqueName: \"kubernetes.io/projected/22c817e6-69d9-401d-86a9-3f52ac3bd891-kube-api-access-7mqxt\") pod 
\"openshift-config-operator-7777fb866f-cq9sm\" (UID: \"22c817e6-69d9-401d-86a9-3f52ac3bd891\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cq9sm" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724395 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c4ab2300-1f56-4c83-a156-78efcd90937a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-w7mwr\" (UID: \"c4ab2300-1f56-4c83-a156-78efcd90937a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-w7mwr" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724412 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfcss\" (UniqueName: \"kubernetes.io/projected/2d61c075-3ea1-4130-bcab-a207ea44a31a-kube-api-access-kfcss\") pod \"console-operator-58897d9998-wrwg8\" (UID: \"2d61c075-3ea1-4130-bcab-a207ea44a31a\") " pod="openshift-console-operator/console-operator-58897d9998-wrwg8" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724428 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40eaba2b-f4b1-449d-84d2-b72c1c2b67b0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fqqc4\" (UID: \"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724444 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-config\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724458 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94cm8\" 
(UniqueName: \"kubernetes.io/projected/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-kube-api-access-94cm8\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724483 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40eaba2b-f4b1-449d-84d2-b72c1c2b67b0-audit-dir\") pod \"apiserver-7bbb656c7d-fqqc4\" (UID: \"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724503 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40eaba2b-f4b1-449d-84d2-b72c1c2b67b0-audit-policies\") pod \"apiserver-7bbb656c7d-fqqc4\" (UID: \"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724521 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw57c\" (UniqueName: \"kubernetes.io/projected/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-kube-api-access-xw57c\") pod \"controller-manager-879f6c89f-d9fml\" (UID: \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724539 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6twsr\" (UniqueName: \"kubernetes.io/projected/7aedb363-fae6-47ca-8a07-5cd884f8ad8c-kube-api-access-6twsr\") pod \"machine-approver-56656f9798-8mtqp\" (UID: \"7aedb363-fae6-47ca-8a07-5cd884f8ad8c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mtqp" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724558 4755 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-74cdp\" (UniqueName: \"kubernetes.io/projected/ae717194-7e0e-4cfb-b662-88c914e8c670-kube-api-access-74cdp\") pod \"cluster-samples-operator-665b6dd947-t522d\" (UID: \"ae717194-7e0e-4cfb-b662-88c914e8c670\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t522d" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724574 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/40eaba2b-f4b1-449d-84d2-b72c1c2b67b0-etcd-client\") pod \"apiserver-7bbb656c7d-fqqc4\" (UID: \"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724591 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-image-import-ca\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724609 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724624 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-audit-dir\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724639 
4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-d9fml\" (UID: \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724653 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28cbd592-dd09-4070-b22e-f67f1d14dde2-serving-cert\") pod \"route-controller-manager-6576b87f9c-2g2cw\" (UID: \"28cbd592-dd09-4070-b22e-f67f1d14dde2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724671 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d61c075-3ea1-4130-bcab-a207ea44a31a-serving-cert\") pod \"console-operator-58897d9998-wrwg8\" (UID: \"2d61c075-3ea1-4130-bcab-a207ea44a31a\") " pod="openshift-console-operator/console-operator-58897d9998-wrwg8" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.724685 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/40eaba2b-f4b1-449d-84d2-b72c1c2b67b0-encryption-config\") pod \"apiserver-7bbb656c7d-fqqc4\" (UID: \"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.704935 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aedb363-fae6-47ca-8a07-5cd884f8ad8c-config\") pod \"machine-approver-56656f9798-8mtqp\" (UID: \"7aedb363-fae6-47ca-8a07-5cd884f8ad8c\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mtqp" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.711077 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d61c075-3ea1-4130-bcab-a207ea44a31a-trusted-ca\") pod \"console-operator-58897d9998-wrwg8\" (UID: \"2d61c075-3ea1-4130-bcab-a207ea44a31a\") " pod="openshift-console-operator/console-operator-58897d9998-wrwg8" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.706375 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-client-ca\") pod \"controller-manager-879f6c89f-d9fml\" (UID: \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.707734 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-lj8d5"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.725596 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40eaba2b-f4b1-449d-84d2-b72c1c2b67b0-audit-dir\") pod \"apiserver-7bbb656c7d-fqqc4\" (UID: \"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.712656 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-config\") pod \"controller-manager-879f6c89f-d9fml\" (UID: \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.726618 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t522d"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.726655 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8bwfr"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.726669 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cq9sm"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.726681 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.726694 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-vhlbl"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.726706 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kw78"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.726716 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mhrgs"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.726727 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v2srk"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.726739 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fqscc"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.726750 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-w7mwr"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.726762 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-8n9ts"] Feb 24 09:58:40 crc 
kubenswrapper[4755]: I0224 09:58:40.726772 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wfrxf"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.726782 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-srwxs"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.726794 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7tncz"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.726804 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4n9rx"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.726814 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kksxb"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.726824 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-w47s8"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.726911 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-lj8d5" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.727332 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.727390 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-d9fml\" (UID: \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.727403 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-image-import-ca\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.727456 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-audit-dir\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.727731 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40eaba2b-f4b1-449d-84d2-b72c1c2b67b0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fqqc4\" (UID: \"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.713318 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-encryption-config\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.727773 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.717045 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40eaba2b-f4b1-449d-84d2-b72c1c2b67b0-serving-cert\") pod \"apiserver-7bbb656c7d-fqqc4\" (UID: \"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.727889 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40eaba2b-f4b1-449d-84d2-b72c1c2b67b0-audit-policies\") pod \"apiserver-7bbb656c7d-fqqc4\" (UID: \"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.728081 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22c817e6-69d9-401d-86a9-3f52ac3bd891-serving-cert\") pod \"openshift-config-operator-7777fb866f-cq9sm\" (UID: \"22c817e6-69d9-401d-86a9-3f52ac3bd891\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cq9sm" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.709660 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" 
(UniqueName: \"kubernetes.io/configmap/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-audit\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.708379 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28cbd592-dd09-4070-b22e-f67f1d14dde2-config\") pod \"route-controller-manager-6576b87f9c-2g2cw\" (UID: \"28cbd592-dd09-4070-b22e-f67f1d14dde2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.728897 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28cbd592-dd09-4070-b22e-f67f1d14dde2-client-ca\") pod \"route-controller-manager-6576b87f9c-2g2cw\" (UID: \"28cbd592-dd09-4070-b22e-f67f1d14dde2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.728921 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-config\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.721299 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.709280 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d61c075-3ea1-4130-bcab-a207ea44a31a-config\") pod \"console-operator-58897d9998-wrwg8\" (UID: \"2d61c075-3ea1-4130-bcab-a207ea44a31a\") " 
pod="openshift-console-operator/console-operator-58897d9998-wrwg8" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.729160 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t4p5v"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.729501 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-etcd-serving-ca\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.710622 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/914080f2-11d8-46d8-8ec3-fe3bebb059b6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-49z4j\" (UID: \"914080f2-11d8-46d8-8ec3-fe3bebb059b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-49z4j" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.710137 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae717194-7e0e-4cfb-b662-88c914e8c670-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-t522d\" (UID: \"ae717194-7e0e-4cfb-b662-88c914e8c670\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t522d" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.729686 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7aedb363-fae6-47ca-8a07-5cd884f8ad8c-auth-proxy-config\") pod \"machine-approver-56656f9798-8mtqp\" (UID: \"7aedb363-fae6-47ca-8a07-5cd884f8ad8c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mtqp" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.729733 4755 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c4ab2300-1f56-4c83-a156-78efcd90937a-images\") pod \"machine-api-operator-5694c8668f-w7mwr\" (UID: \"c4ab2300-1f56-4c83-a156-78efcd90937a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-w7mwr" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.709999 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-serving-cert\") pod \"controller-manager-879f6c89f-d9fml\" (UID: \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.710081 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4ab2300-1f56-4c83-a156-78efcd90937a-config\") pod \"machine-api-operator-5694c8668f-w7mwr\" (UID: \"c4ab2300-1f56-4c83-a156-78efcd90937a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-w7mwr" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.729996 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/22c817e6-69d9-401d-86a9-3f52ac3bd891-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cq9sm\" (UID: \"22c817e6-69d9-401d-86a9-3f52ac3bd891\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cq9sm" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.730083 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xpvsw"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.730273 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/914080f2-11d8-46d8-8ec3-fe3bebb059b6-config\") pod 
\"openshift-apiserver-operator-796bbdcf4f-49z4j\" (UID: \"914080f2-11d8-46d8-8ec3-fe3bebb059b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-49z4j" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.730324 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-node-pullsecrets\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.711034 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-serving-cert\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.710585 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/40eaba2b-f4b1-449d-84d2-b72c1c2b67b0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fqqc4\" (UID: \"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.731267 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stj9j"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.732006 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/40eaba2b-f4b1-449d-84d2-b72c1c2b67b0-encryption-config\") pod \"apiserver-7bbb656c7d-fqqc4\" (UID: \"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:40 crc 
kubenswrapper[4755]: I0224 09:58:40.732423 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-p82hk"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.732789 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/40eaba2b-f4b1-449d-84d2-b72c1c2b67b0-etcd-client\") pod \"apiserver-7bbb656c7d-fqqc4\" (UID: \"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.732879 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-etcd-client\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.732931 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d61c075-3ea1-4130-bcab-a207ea44a31a-serving-cert\") pod \"console-operator-58897d9998-wrwg8\" (UID: \"2d61c075-3ea1-4130-bcab-a207ea44a31a\") " pod="openshift-console-operator/console-operator-58897d9998-wrwg8" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.733029 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7aedb363-fae6-47ca-8a07-5cd884f8ad8c-machine-approver-tls\") pod \"machine-approver-56656f9798-8mtqp\" (UID: \"7aedb363-fae6-47ca-8a07-5cd884f8ad8c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mtqp" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.734122 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-l7frg"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 
09:58:40.734369 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c4ab2300-1f56-4c83-a156-78efcd90937a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-w7mwr\" (UID: \"c4ab2300-1f56-4c83-a156-78efcd90937a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-w7mwr" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.735356 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bbm9l"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.735363 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28cbd592-dd09-4070-b22e-f67f1d14dde2-serving-cert\") pod \"route-controller-manager-6576b87f9c-2g2cw\" (UID: \"28cbd592-dd09-4070-b22e-f67f1d14dde2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.735681 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cltn2"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.736642 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dprd"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.738108 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tp8zp"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.738762 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zmmhb"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.740117 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7qxdc"] Feb 24 09:58:40 crc 
kubenswrapper[4755]: I0224 09:58:40.740221 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.741044 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tft86"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.742120 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-smsp7"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.743116 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532105-sr449"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.743125 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.744127 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m78zl"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.745025 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lj8d5"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.749690 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-dgmbb"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.752200 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-dgmbb" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.754136 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-62kx9"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.755639 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zmmhb"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.756985 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-dgmbb"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.759104 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-5brsg"] Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.759829 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5brsg" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.761375 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.781798 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.801302 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.822053 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.841025 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.860809 
4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.881679 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.902779 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.921422 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.941919 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.963164 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 24 09:58:40 crc kubenswrapper[4755]: I0224 09:58:40.982746 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.001737 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.022180 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.042116 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.061550 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 24 
09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.102879 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.122891 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.142617 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.161312 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.183282 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.203251 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.223135 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.242485 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.262259 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.282411 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 
09:58:41.301830 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.332204 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.341870 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.361891 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.382194 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.402893 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.422095 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.444729 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.461893 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.484305 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.502682 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.522994 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.542049 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.562419 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.582859 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.601744 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.622197 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.641963 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.661536 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.680447 4755 request.go:700] Waited for 1.013263296s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-dockercfg-k9rxt&limit=500&resourceVersion=0
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.682378 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.702428 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.741996 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.762634 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.784475 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.802641 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.822341 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.842494 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.862026 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.881808 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.902583 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.922425 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.942525 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.962110 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 24 09:58:41 crc kubenswrapper[4755]: I0224 09:58:41.982211 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.001227 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.022310 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.041487 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.062181 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.082824 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.101960 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.131614 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.142530 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.162146 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.183000 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.202882 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.222900 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.242672 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.262244 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.282664 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.335555 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc48z\" (UniqueName: \"kubernetes.io/projected/914080f2-11d8-46d8-8ec3-fe3bebb059b6-kube-api-access-rc48z\") pod \"openshift-apiserver-operator-796bbdcf4f-49z4j\" (UID: \"914080f2-11d8-46d8-8ec3-fe3bebb059b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-49z4j"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.352988 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkrrm\" (UniqueName: \"kubernetes.io/projected/40eaba2b-f4b1-449d-84d2-b72c1c2b67b0-kube-api-access-mkrrm\") pod \"apiserver-7bbb656c7d-fqqc4\" (UID: \"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.377536 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpz6w\" (UniqueName: \"kubernetes.io/projected/c4ab2300-1f56-4c83-a156-78efcd90937a-kube-api-access-lpz6w\") pod \"machine-api-operator-5694c8668f-w7mwr\" (UID: \"c4ab2300-1f56-4c83-a156-78efcd90937a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-w7mwr"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.381712 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.386060 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-49z4j"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.396134 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mqxt\" (UniqueName: \"kubernetes.io/projected/22c817e6-69d9-401d-86a9-3f52ac3bd891-kube-api-access-7mqxt\") pod \"openshift-config-operator-7777fb866f-cq9sm\" (UID: \"22c817e6-69d9-401d-86a9-3f52ac3bd891\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cq9sm"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.421287 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-w7mwr"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.423757 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6twsr\" (UniqueName: \"kubernetes.io/projected/7aedb363-fae6-47ca-8a07-5cd884f8ad8c-kube-api-access-6twsr\") pod \"machine-approver-56656f9798-8mtqp\" (UID: \"7aedb363-fae6-47ca-8a07-5cd884f8ad8c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mtqp"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.438549 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.444614 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw57c\" (UniqueName: \"kubernetes.io/projected/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-kube-api-access-xw57c\") pod \"controller-manager-879f6c89f-d9fml\" (UID: \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.476648 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfcss\" (UniqueName: \"kubernetes.io/projected/2d61c075-3ea1-4130-bcab-a207ea44a31a-kube-api-access-kfcss\") pod \"console-operator-58897d9998-wrwg8\" (UID: \"2d61c075-3ea1-4130-bcab-a207ea44a31a\") " pod="openshift-console-operator/console-operator-58897d9998-wrwg8"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.488100 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74cdp\" (UniqueName: \"kubernetes.io/projected/ae717194-7e0e-4cfb-b662-88c914e8c670-kube-api-access-74cdp\") pod \"cluster-samples-operator-665b6dd947-t522d\" (UID: \"ae717194-7e0e-4cfb-b662-88c914e8c670\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t522d"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.497326 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mtqp"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.501481 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.510246 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94cm8\" (UniqueName: \"kubernetes.io/projected/b7bace2b-7d6c-4332-b9d3-a1848568dfb0-kube-api-access-94cm8\") pod \"apiserver-76f77b778f-xpvsw\" (UID: \"b7bace2b-7d6c-4332-b9d3-a1848568dfb0\") " pod="openshift-apiserver/apiserver-76f77b778f-xpvsw"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.510613 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cq9sm"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.524028 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.556453 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gnrl\" (UniqueName: \"kubernetes.io/projected/28cbd592-dd09-4070-b22e-f67f1d14dde2-kube-api-access-2gnrl\") pod \"route-controller-manager-6576b87f9c-2g2cw\" (UID: \"28cbd592-dd09-4070-b22e-f67f1d14dde2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.562378 4755 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.582759 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.601507 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.609135 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.623613 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.645235 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.648525 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.663943 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.680512 4755 request.go:700] Waited for 1.927958829s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.683246 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.696322 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4"]
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.701753 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.702793 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t522d"
Feb 24 09:58:42 crc kubenswrapper[4755]: W0224 09:58:42.703916 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40eaba2b_f4b1_449d_84d2_b72c1c2b67b0.slice/crio-530dfe18c3510becba4ee55080a6f08f76195b41cb533dc254f32374ce8db250 WatchSource:0}: Error finding container 530dfe18c3510becba4ee55080a6f08f76195b41cb533dc254f32374ce8db250: Status 404 returned error can't find the container with id 530dfe18c3510becba4ee55080a6f08f76195b41cb533dc254f32374ce8db250
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.720837 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.734225 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cq9sm"]
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.741442 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 24 09:58:42 crc kubenswrapper[4755]: W0224 09:58:42.751414 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22c817e6_69d9_401d_86a9_3f52ac3bd891.slice/crio-745eb8af997001c074cdf0e09086a9cfa68a471f4402b9d8dda5f8880a089986 WatchSource:0}: Error finding container 745eb8af997001c074cdf0e09086a9cfa68a471f4402b9d8dda5f8880a089986: Status 404 returned error can't find the container with id 745eb8af997001c074cdf0e09086a9cfa68a471f4402b9d8dda5f8880a089986
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.757610 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xpvsw"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.772542 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-wrwg8"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.780090 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d9fml"]
Feb 24 09:58:42 crc kubenswrapper[4755]: W0224 09:58:42.817711 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6fd9421_d674_405c_a6c8_f25ff3c2f9f7.slice/crio-0f28b50851621089380b290d71906301e0b1c815868a2e3d9b7b09ee5f446140 WatchSource:0}: Error finding container 0f28b50851621089380b290d71906301e0b1c815868a2e3d9b7b09ee5f446140: Status 404 returned error can't find the container with id 0f28b50851621089380b290d71906301e0b1c815868a2e3d9b7b09ee5f446140
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.837841 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-49z4j"]
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.855476 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.855513 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqq59\" (UniqueName: \"kubernetes.io/projected/f70c68fa-4429-40ac-9d4a-72f5b5086c94-kube-api-access-rqq59\") pod \"package-server-manager-789f6589d5-tft86\" (UID: \"f70c68fa-4429-40ac-9d4a-72f5b5086c94\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tft86"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.855536 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28ea82ea-bfeb-4a67-bac4-a97156c7995b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-m78zl\" (UID: \"28ea82ea-bfeb-4a67-bac4-a97156c7995b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m78zl"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.855554 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b75345bf-b93f-471f-9b11-e5c5695e7e6a-stats-auth\") pod \"router-default-5444994796-jb8zb\" (UID: \"b75345bf-b93f-471f-9b11-e5c5695e7e6a\") " pod="openshift-ingress/router-default-5444994796-jb8zb"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.855569 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sprnm\" (UniqueName: \"kubernetes.io/projected/28ea82ea-bfeb-4a67-bac4-a97156c7995b-kube-api-access-sprnm\") pod \"cluster-image-registry-operator-dc59b4c8b-m78zl\" (UID: \"28ea82ea-bfeb-4a67-bac4-a97156c7995b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m78zl"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.855587 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdc04075-1afc-4a7b-8b56-72d3ad508da5-serving-cert\") pod \"etcd-operator-b45778765-8n9ts\" (UID: \"bdc04075-1afc-4a7b-8b56-72d3ad508da5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8n9ts"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.855610 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89zfg\" (UniqueName: \"kubernetes.io/projected/2317257d-494c-48b7-a69c-013e8b1d7d81-kube-api-access-89zfg\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.855625 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4xml\" (UniqueName: \"kubernetes.io/projected/cf5157db-1776-4520-93cc-3af5f7c91511-kube-api-access-s4xml\") pod \"authentication-operator-69f744f599-smsp7\" (UID: \"cf5157db-1776-4520-93cc-3af5f7c91511\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-smsp7"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.855639 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/275884c2-8599-4867-97aa-04d67ba35182-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.855656 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.855672 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9ab5d7c4-9b5a-4919-9452-0f906a526bef-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7dprd\" (UID: \"9ab5d7c4-9b5a-4919-9452-0f906a526bef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dprd"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.855689 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b320b4bf-81a0-4d15-a48b-2d24f6014162-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-8bwfr\" (UID: \"b320b4bf-81a0-4d15-a48b-2d24f6014162\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8bwfr"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.855721 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c15ecede-c840-4fc8-bc38-a970796c9517-console-serving-cert\") pod \"console-f9d7485db-fqscc\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") " pod="openshift-console/console-f9d7485db-fqscc"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.855759 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/275884c2-8599-4867-97aa-04d67ba35182-bound-sa-token\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.855776 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjdpw\" (UniqueName: \"kubernetes.io/projected/c15ecede-c840-4fc8-bc38-a970796c9517-kube-api-access-kjdpw\") pod \"console-f9d7485db-fqscc\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") " pod="openshift-console/console-f9d7485db-fqscc"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.855819 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv2sd\" (UniqueName: \"kubernetes.io/projected/4139cb8d-4ba9-4a77-8858-137229c972db-kube-api-access-bv2sd\") pod \"multus-admission-controller-857f4d67dd-kksxb\" (UID: \"4139cb8d-4ba9-4a77-8858-137229c972db\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kksxb"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.855843 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0-trusted-ca\") pod \"ingress-operator-5b745b69d9-mhrgs\" (UID: \"2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mhrgs"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.855879 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.855895 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff434ccf-df42-4dac-b43c-3f68265d2a7b-config\") pod \"kube-apiserver-operator-766d6c64bb-v2srk\" (UID: \"ff434ccf-df42-4dac-b43c-3f68265d2a7b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v2srk"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.855913 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.855932 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/28ea82ea-bfeb-4a67-bac4-a97156c7995b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-m78zl\" (UID: \"28ea82ea-bfeb-4a67-bac4-a97156c7995b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m78zl"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.855964 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.855985 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee204f54-d95c-4d2d-8b56-2812a6843938-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4n9rx\" (UID: \"ee204f54-d95c-4d2d-8b56-2812a6843938\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4n9rx"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856021 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0-metrics-tls\") pod \"ingress-operator-5b745b69d9-mhrgs\" (UID: \"2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mhrgs"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856041 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c15ecede-c840-4fc8-bc38-a970796c9517-console-oauth-config\") pod \"console-f9d7485db-fqscc\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") " pod="openshift-console/console-f9d7485db-fqscc"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856118 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/387127d5-a12c-4dee-96d9-47f8983e0356-apiservice-cert\") pod \"packageserver-d55dfcdfc-cltn2\" (UID: \"387127d5-a12c-4dee-96d9-47f8983e0356\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cltn2"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856141 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/275884c2-8599-4867-97aa-04d67ba35182-registry-certificates\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856159 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff434ccf-df42-4dac-b43c-3f68265d2a7b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-v2srk\" (UID: \"ff434ccf-df42-4dac-b43c-3f68265d2a7b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v2srk"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856183 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd8rg\" (UniqueName: \"kubernetes.io/projected/387127d5-a12c-4dee-96d9-47f8983e0356-kube-api-access-wd8rg\") pod \"packageserver-d55dfcdfc-cltn2\" (UID: \"387127d5-a12c-4dee-96d9-47f8983e0356\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cltn2"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856202 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2wnv\" (UniqueName: \"kubernetes.io/projected/2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0-kube-api-access-z2wnv\") pod \"ingress-operator-5b745b69d9-mhrgs\" (UID: \"2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mhrgs"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856264 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95l97\" (UniqueName: \"kubernetes.io/projected/8694e8ab-019b-41c1-bf7c-49d3ca10f213-kube-api-access-95l97\") pod \"dns-operator-744455d44c-l7frg\" (UID: \"8694e8ab-019b-41c1-bf7c-49d3ca10f213\") " pod="openshift-dns-operator/dns-operator-744455d44c-l7frg"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856285 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bdc04075-1afc-4a7b-8b56-72d3ad508da5-etcd-client\") pod \"etcd-operator-b45778765-8n9ts\" (UID: \"bdc04075-1afc-4a7b-8b56-72d3ad508da5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8n9ts"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856320 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf5157db-1776-4520-93cc-3af5f7c91511-service-ca-bundle\") pod \"authentication-operator-69f744f599-smsp7\" (UID: \"cf5157db-1776-4520-93cc-3af5f7c91511\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-smsp7"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856342 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/387127d5-a12c-4dee-96d9-47f8983e0356-webhook-cert\") pod \"packageserver-d55dfcdfc-cltn2\" (UID: \"387127d5-a12c-4dee-96d9-47f8983e0356\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cltn2"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856362 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b320b4bf-81a0-4d15-a48b-2d24f6014162-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-8bwfr\" (UID: \"b320b4bf-81a0-4d15-a48b-2d24f6014162\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8bwfr"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856377 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdc04075-1afc-4a7b-8b56-72d3ad508da5-config\") pod \"etcd-operator-b45778765-8n9ts\" (UID: \"bdc04075-1afc-4a7b-8b56-72d3ad508da5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8n9ts"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856403 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31ac03e9-2f2e-4103-bcc8-f660c98d5a39-images\") pod \"machine-config-operator-74547568cd-p82hk\" (UID: \"31ac03e9-2f2e-4103-bcc8-f660c98d5a39\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p82hk"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856420 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856436 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3abe3fcd-61fb-4f6e-b196-dfc60163c86b-proxy-tls\") pod \"machine-config-controller-84d6567774-bbm9l\" (UID: \"3abe3fcd-61fb-4f6e-b196-dfc60163c86b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbm9l"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856450 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bdc04075-1afc-4a7b-8b56-72d3ad508da5-etcd-ca\") pod \"etcd-operator-b45778765-8n9ts\" (UID: \"bdc04075-1afc-4a7b-8b56-72d3ad508da5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8n9ts"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856479 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4139cb8d-4ba9-4a77-8858-137229c972db-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kksxb\" (UID: \"4139cb8d-4ba9-4a77-8858-137229c972db\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kksxb"
Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856504 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0-bound-sa-token\") pod 
\"ingress-operator-5b745b69d9-mhrgs\" (UID: \"2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mhrgs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856522 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgtvf\" (UniqueName: \"kubernetes.io/projected/9fbb35dd-331f-43cb-8f92-f8a6f1e4ef1b-kube-api-access-pgtvf\") pod \"migrator-59844c95c7-wfrxf\" (UID: \"9fbb35dd-331f-43cb-8f92-f8a6f1e4ef1b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wfrxf" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856538 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856554 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9dhx\" (UniqueName: \"kubernetes.io/projected/b75345bf-b93f-471f-9b11-e5c5695e7e6a-kube-api-access-q9dhx\") pod \"router-default-5444994796-jb8zb\" (UID: \"b75345bf-b93f-471f-9b11-e5c5695e7e6a\") " pod="openshift-ingress/router-default-5444994796-jb8zb" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856591 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c15ecede-c840-4fc8-bc38-a970796c9517-oauth-serving-cert\") pod \"console-f9d7485db-fqscc\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") " pod="openshift-console/console-f9d7485db-fqscc" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856615 4755 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31ac03e9-2f2e-4103-bcc8-f660c98d5a39-auth-proxy-config\") pod \"machine-config-operator-74547568cd-p82hk\" (UID: \"31ac03e9-2f2e-4103-bcc8-f660c98d5a39\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p82hk" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856635 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3abe3fcd-61fb-4f6e-b196-dfc60163c86b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bbm9l\" (UID: \"3abe3fcd-61fb-4f6e-b196-dfc60163c86b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbm9l" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856661 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt499\" (UniqueName: \"kubernetes.io/projected/ee204f54-d95c-4d2d-8b56-2812a6843938-kube-api-access-gt499\") pod \"control-plane-machine-set-operator-78cbb6b69f-4n9rx\" (UID: \"ee204f54-d95c-4d2d-8b56-2812a6843938\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4n9rx" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856677 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f70c68fa-4429-40ac-9d4a-72f5b5086c94-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tft86\" (UID: \"f70c68fa-4429-40ac-9d4a-72f5b5086c94\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tft86" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856692 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b75345bf-b93f-471f-9b11-e5c5695e7e6a-metrics-certs\") pod \"router-default-5444994796-jb8zb\" (UID: \"b75345bf-b93f-471f-9b11-e5c5695e7e6a\") " pod="openshift-ingress/router-default-5444994796-jb8zb" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856693 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-w7mwr"] Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856709 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rmjk\" (UniqueName: \"kubernetes.io/projected/bdc04075-1afc-4a7b-8b56-72d3ad508da5-kube-api-access-5rmjk\") pod \"etcd-operator-b45778765-8n9ts\" (UID: \"bdc04075-1afc-4a7b-8b56-72d3ad508da5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8n9ts" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856795 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5cb39b3-d948-4b7d-88b4-1c4962e0c35a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9kw78\" (UID: \"e5cb39b3-d948-4b7d-88b4-1c4962e0c35a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kw78" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856839 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b75345bf-b93f-471f-9b11-e5c5695e7e6a-service-ca-bundle\") pod \"router-default-5444994796-jb8zb\" (UID: \"b75345bf-b93f-471f-9b11-e5c5695e7e6a\") " pod="openshift-ingress/router-default-5444994796-jb8zb" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856864 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856892 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jdxh\" (UniqueName: \"kubernetes.io/projected/d1c02e5f-aebc-4edb-9e99-f26fc84a32ab-kube-api-access-4jdxh\") pod \"downloads-7954f5f757-vhlbl\" (UID: \"d1c02e5f-aebc-4edb-9e99-f26fc84a32ab\") " pod="openshift-console/downloads-7954f5f757-vhlbl" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856915 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf5157db-1776-4520-93cc-3af5f7c91511-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-smsp7\" (UID: \"cf5157db-1776-4520-93cc-3af5f7c91511\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-smsp7" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.856935 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857016 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c15ecede-c840-4fc8-bc38-a970796c9517-trusted-ca-bundle\") pod \"console-f9d7485db-fqscc\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") " 
pod="openshift-console/console-f9d7485db-fqscc" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857038 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hdwg\" (UniqueName: \"kubernetes.io/projected/b320b4bf-81a0-4d15-a48b-2d24f6014162-kube-api-access-5hdwg\") pod \"openshift-controller-manager-operator-756b6f6bc6-8bwfr\" (UID: \"b320b4bf-81a0-4d15-a48b-2d24f6014162\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8bwfr" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857059 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31ac03e9-2f2e-4103-bcc8-f660c98d5a39-proxy-tls\") pod \"machine-config-operator-74547568cd-p82hk\" (UID: \"31ac03e9-2f2e-4103-bcc8-f660c98d5a39\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p82hk" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857108 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857127 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5cb39b3-d948-4b7d-88b4-1c4962e0c35a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9kw78\" (UID: \"e5cb39b3-d948-4b7d-88b4-1c4962e0c35a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kw78" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857150 4755 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/275884c2-8599-4867-97aa-04d67ba35182-registry-tls\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857171 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2317257d-494c-48b7-a69c-013e8b1d7d81-audit-policies\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857189 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c15ecede-c840-4fc8-bc38-a970796c9517-console-config\") pod \"console-f9d7485db-fqscc\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") " pod="openshift-console/console-f9d7485db-fqscc" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857232 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hdpw\" (UniqueName: \"kubernetes.io/projected/9ab5d7c4-9b5a-4919-9452-0f906a526bef-kube-api-access-4hdpw\") pod \"olm-operator-6b444d44fb-7dprd\" (UID: \"9ab5d7c4-9b5a-4919-9452-0f906a526bef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dprd" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857254 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c15ecede-c840-4fc8-bc38-a970796c9517-service-ca\") pod \"console-f9d7485db-fqscc\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") " 
pod="openshift-console/console-f9d7485db-fqscc" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857299 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/275884c2-8599-4867-97aa-04d67ba35182-trusted-ca\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857339 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2317257d-494c-48b7-a69c-013e8b1d7d81-audit-dir\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857356 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857373 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5cb39b3-d948-4b7d-88b4-1c4962e0c35a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9kw78\" (UID: \"e5cb39b3-d948-4b7d-88b4-1c4962e0c35a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kw78" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857394 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bdc04075-1afc-4a7b-8b56-72d3ad508da5-etcd-service-ca\") pod \"etcd-operator-b45778765-8n9ts\" (UID: \"bdc04075-1afc-4a7b-8b56-72d3ad508da5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8n9ts" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857409 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/387127d5-a12c-4dee-96d9-47f8983e0356-tmpfs\") pod \"packageserver-d55dfcdfc-cltn2\" (UID: \"387127d5-a12c-4dee-96d9-47f8983e0356\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cltn2" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857432 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/275884c2-8599-4867-97aa-04d67ba35182-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857446 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ab5d7c4-9b5a-4919-9452-0f906a526bef-srv-cert\") pod \"olm-operator-6b444d44fb-7dprd\" (UID: \"9ab5d7c4-9b5a-4919-9452-0f906a526bef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dprd" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857472 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f22xn\" (UniqueName: \"kubernetes.io/projected/31ac03e9-2f2e-4103-bcc8-f660c98d5a39-kube-api-access-f22xn\") pod \"machine-config-operator-74547568cd-p82hk\" (UID: \"31ac03e9-2f2e-4103-bcc8-f660c98d5a39\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p82hk" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857498 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857513 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf5157db-1776-4520-93cc-3af5f7c91511-serving-cert\") pod \"authentication-operator-69f744f599-smsp7\" (UID: \"cf5157db-1776-4520-93cc-3af5f7c91511\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-smsp7" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857527 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b75345bf-b93f-471f-9b11-e5c5695e7e6a-default-certificate\") pod \"router-default-5444994796-jb8zb\" (UID: \"b75345bf-b93f-471f-9b11-e5c5695e7e6a\") " pod="openshift-ingress/router-default-5444994796-jb8zb" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857576 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25dqb\" (UniqueName: \"kubernetes.io/projected/3abe3fcd-61fb-4f6e-b196-dfc60163c86b-kube-api-access-25dqb\") pod \"machine-config-controller-84d6567774-bbm9l\" (UID: \"3abe3fcd-61fb-4f6e-b196-dfc60163c86b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbm9l" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857593 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8694e8ab-019b-41c1-bf7c-49d3ca10f213-metrics-tls\") pod \"dns-operator-744455d44c-l7frg\" (UID: \"8694e8ab-019b-41c1-bf7c-49d3ca10f213\") " pod="openshift-dns-operator/dns-operator-744455d44c-l7frg" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857609 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf5157db-1776-4520-93cc-3af5f7c91511-config\") pod \"authentication-operator-69f744f599-smsp7\" (UID: \"cf5157db-1776-4520-93cc-3af5f7c91511\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-smsp7" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857646 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t4zj\" (UniqueName: \"kubernetes.io/projected/275884c2-8599-4867-97aa-04d67ba35182-kube-api-access-4t4zj\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857661 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff434ccf-df42-4dac-b43c-3f68265d2a7b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-v2srk\" (UID: \"ff434ccf-df42-4dac-b43c-3f68265d2a7b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v2srk" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.857676 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/28ea82ea-bfeb-4a67-bac4-a97156c7995b-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-dc59b4c8b-m78zl\" (UID: \"28ea82ea-bfeb-4a67-bac4-a97156c7995b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m78zl" Feb 24 09:58:42 crc kubenswrapper[4755]: E0224 09:58:42.865045 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:43.364874722 +0000 UTC m=+227.821397265 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.884755 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw"] Feb 24 09:58:42 crc kubenswrapper[4755]: W0224 09:58:42.889180 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4ab2300_1f56_4c83_a156_78efcd90937a.slice/crio-5c1a3325e32e53464255abdf8cc9f44756c466f0e7998e232dfa23a3a9b1a385 WatchSource:0}: Error finding container 5c1a3325e32e53464255abdf8cc9f44756c466f0e7998e232dfa23a3a9b1a385: Status 404 returned error can't find the container with id 5c1a3325e32e53464255abdf8cc9f44756c466f0e7998e232dfa23a3a9b1a385 Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.958426 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:42 crc kubenswrapper[4755]: E0224 09:58:42.958620 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:43.458569823 +0000 UTC m=+227.915092366 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.958693 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f22xn\" (UniqueName: \"kubernetes.io/projected/31ac03e9-2f2e-4103-bcc8-f660c98d5a39-kube-api-access-f22xn\") pod \"machine-config-operator-74547568cd-p82hk\" (UID: \"31ac03e9-2f2e-4103-bcc8-f660c98d5a39\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p82hk" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.958762 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.958796 4755 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf5157db-1776-4520-93cc-3af5f7c91511-serving-cert\") pod \"authentication-operator-69f744f599-smsp7\" (UID: \"cf5157db-1776-4520-93cc-3af5f7c91511\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-smsp7" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.958846 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b75345bf-b93f-471f-9b11-e5c5695e7e6a-default-certificate\") pod \"router-default-5444994796-jb8zb\" (UID: \"b75345bf-b93f-471f-9b11-e5c5695e7e6a\") " pod="openshift-ingress/router-default-5444994796-jb8zb" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.958874 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4f3eb31-20c7-4549-9df4-979045249f81-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7qxdc\" (UID: \"a4f3eb31-20c7-4549-9df4-979045249f81\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7qxdc" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.958932 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25dqb\" (UniqueName: \"kubernetes.io/projected/3abe3fcd-61fb-4f6e-b196-dfc60163c86b-kube-api-access-25dqb\") pod \"machine-config-controller-84d6567774-bbm9l\" (UID: \"3abe3fcd-61fb-4f6e-b196-dfc60163c86b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbm9l" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.958963 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8694e8ab-019b-41c1-bf7c-49d3ca10f213-metrics-tls\") pod \"dns-operator-744455d44c-l7frg\" (UID: 
\"8694e8ab-019b-41c1-bf7c-49d3ca10f213\") " pod="openshift-dns-operator/dns-operator-744455d44c-l7frg" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959007 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d8a1f03-1c60-409f-85c0-5d2dad7e624c-config-volume\") pod \"dns-default-lj8d5\" (UID: \"0d8a1f03-1c60-409f-85c0-5d2dad7e624c\") " pod="openshift-dns/dns-default-lj8d5" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959028 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mdfn\" (UniqueName: \"kubernetes.io/projected/0d8a1f03-1c60-409f-85c0-5d2dad7e624c-kube-api-access-2mdfn\") pod \"dns-default-lj8d5\" (UID: \"0d8a1f03-1c60-409f-85c0-5d2dad7e624c\") " pod="openshift-dns/dns-default-lj8d5" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959051 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf5157db-1776-4520-93cc-3af5f7c91511-config\") pod \"authentication-operator-69f744f599-smsp7\" (UID: \"cf5157db-1776-4520-93cc-3af5f7c91511\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-smsp7" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959098 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/279eff18-be0a-4f07-81fc-69fef8faac6c-secret-volume\") pod \"collect-profiles-29532105-sr449\" (UID: \"279eff18-be0a-4f07-81fc-69fef8faac6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-sr449" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959125 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4fnp\" (UniqueName: 
\"kubernetes.io/projected/68f685bc-87e6-44fb-97dd-798196ff677d-kube-api-access-s4fnp\") pod \"catalog-operator-68c6474976-w47s8\" (UID: \"68f685bc-87e6-44fb-97dd-798196ff677d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-w47s8" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959166 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a99991-5cb3-4242-b354-ee3908088b65-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-stj9j\" (UID: \"81a99991-5cb3-4242-b354-ee3908088b65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stj9j" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959195 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t4zj\" (UniqueName: \"kubernetes.io/projected/275884c2-8599-4867-97aa-04d67ba35182-kube-api-access-4t4zj\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959218 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff434ccf-df42-4dac-b43c-3f68265d2a7b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-v2srk\" (UID: \"ff434ccf-df42-4dac-b43c-3f68265d2a7b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v2srk" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959259 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/28ea82ea-bfeb-4a67-bac4-a97156c7995b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-m78zl\" (UID: \"28ea82ea-bfeb-4a67-bac4-a97156c7995b\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m78zl" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959291 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959333 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqq59\" (UniqueName: \"kubernetes.io/projected/f70c68fa-4429-40ac-9d4a-72f5b5086c94-kube-api-access-rqq59\") pod \"package-server-manager-789f6589d5-tft86\" (UID: \"f70c68fa-4429-40ac-9d4a-72f5b5086c94\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tft86" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959364 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28ea82ea-bfeb-4a67-bac4-a97156c7995b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-m78zl\" (UID: \"28ea82ea-bfeb-4a67-bac4-a97156c7995b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m78zl" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959404 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d2406ef-c2fd-495a-82fd-00ef8257e99e-cert\") pod \"ingress-canary-dgmbb\" (UID: \"1d2406ef-c2fd-495a-82fd-00ef8257e99e\") " pod="openshift-ingress-canary/ingress-canary-dgmbb" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959428 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/b75345bf-b93f-471f-9b11-e5c5695e7e6a-stats-auth\") pod \"router-default-5444994796-jb8zb\" (UID: \"b75345bf-b93f-471f-9b11-e5c5695e7e6a\") " pod="openshift-ingress/router-default-5444994796-jb8zb" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959450 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sprnm\" (UniqueName: \"kubernetes.io/projected/28ea82ea-bfeb-4a67-bac4-a97156c7995b-kube-api-access-sprnm\") pod \"cluster-image-registry-operator-dc59b4c8b-m78zl\" (UID: \"28ea82ea-bfeb-4a67-bac4-a97156c7995b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m78zl" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959509 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4xml\" (UniqueName: \"kubernetes.io/projected/cf5157db-1776-4520-93cc-3af5f7c91511-kube-api-access-s4xml\") pod \"authentication-operator-69f744f599-smsp7\" (UID: \"cf5157db-1776-4520-93cc-3af5f7c91511\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-smsp7" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959532 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdc04075-1afc-4a7b-8b56-72d3ad508da5-serving-cert\") pod \"etcd-operator-b45778765-8n9ts\" (UID: \"bdc04075-1afc-4a7b-8b56-72d3ad508da5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8n9ts" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959573 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e76542af-c648-4aac-8cc1-e07d238c6a2c-serving-cert\") pod \"service-ca-operator-777779d784-62kx9\" (UID: \"e76542af-c648-4aac-8cc1-e07d238c6a2c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-62kx9" Feb 24 09:58:42 crc 
kubenswrapper[4755]: I0224 09:58:42.959599 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89zfg\" (UniqueName: \"kubernetes.io/projected/2317257d-494c-48b7-a69c-013e8b1d7d81-kube-api-access-89zfg\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959623 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/275884c2-8599-4867-97aa-04d67ba35182-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959667 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/324736a7-6465-419b-8d2a-0a8e678f5421-signing-cabundle\") pod \"service-ca-9c57cc56f-t4p5v\" (UID: \"324736a7-6465-419b-8d2a-0a8e678f5421\") " pod="openshift-service-ca/service-ca-9c57cc56f-t4p5v" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959696 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959741 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9ab5d7c4-9b5a-4919-9452-0f906a526bef-profile-collector-cert\") pod 
\"olm-operator-6b444d44fb-7dprd\" (UID: \"9ab5d7c4-9b5a-4919-9452-0f906a526bef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dprd" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959766 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b320b4bf-81a0-4d15-a48b-2d24f6014162-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-8bwfr\" (UID: \"b320b4bf-81a0-4d15-a48b-2d24f6014162\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8bwfr" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959807 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k64dq\" (UniqueName: \"kubernetes.io/projected/81a99991-5cb3-4242-b354-ee3908088b65-kube-api-access-k64dq\") pod \"kube-storage-version-migrator-operator-b67b599dd-stj9j\" (UID: \"81a99991-5cb3-4242-b354-ee3908088b65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stj9j" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959856 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/275884c2-8599-4867-97aa-04d67ba35182-bound-sa-token\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959897 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c15ecede-c840-4fc8-bc38-a970796c9517-console-serving-cert\") pod \"console-f9d7485db-fqscc\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") " pod="openshift-console/console-f9d7485db-fqscc" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959923 4755 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e105e7e0-0046-47ca-8a73-c27e385a0301-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tp8zp\" (UID: \"e105e7e0-0046-47ca-8a73-c27e385a0301\") " pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959962 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vthnr\" (UniqueName: \"kubernetes.io/projected/1d2406ef-c2fd-495a-82fd-00ef8257e99e-kube-api-access-vthnr\") pod \"ingress-canary-dgmbb\" (UID: \"1d2406ef-c2fd-495a-82fd-00ef8257e99e\") " pod="openshift-ingress-canary/ingress-canary-dgmbb" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.959990 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjdpw\" (UniqueName: \"kubernetes.io/projected/c15ecede-c840-4fc8-bc38-a970796c9517-kube-api-access-kjdpw\") pod \"console-f9d7485db-fqscc\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") " pod="openshift-console/console-f9d7485db-fqscc" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.960017 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv2sd\" (UniqueName: \"kubernetes.io/projected/4139cb8d-4ba9-4a77-8858-137229c972db-kube-api-access-bv2sd\") pod \"multus-admission-controller-857f4d67dd-kksxb\" (UID: \"4139cb8d-4ba9-4a77-8858-137229c972db\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kksxb" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.960127 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0-trusted-ca\") pod \"ingress-operator-5b745b69d9-mhrgs\" (UID: \"2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mhrgs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.960161 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.960205 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.960230 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff434ccf-df42-4dac-b43c-3f68265d2a7b-config\") pod \"kube-apiserver-operator-766d6c64bb-v2srk\" (UID: \"ff434ccf-df42-4dac-b43c-3f68265d2a7b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v2srk" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.960256 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/28ea82ea-bfeb-4a67-bac4-a97156c7995b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-m78zl\" (UID: \"28ea82ea-bfeb-4a67-bac4-a97156c7995b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m78zl" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.960297 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/68f685bc-87e6-44fb-97dd-798196ff677d-profile-collector-cert\") pod \"catalog-operator-68c6474976-w47s8\" (UID: \"68f685bc-87e6-44fb-97dd-798196ff677d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-w47s8" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.960328 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.960372 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee204f54-d95c-4d2d-8b56-2812a6843938-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4n9rx\" (UID: \"ee204f54-d95c-4d2d-8b56-2812a6843938\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4n9rx" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.960401 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c15ecede-c840-4fc8-bc38-a970796c9517-console-oauth-config\") pod \"console-f9d7485db-fqscc\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") " pod="openshift-console/console-f9d7485db-fqscc" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.960439 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0-metrics-tls\") pod \"ingress-operator-5b745b69d9-mhrgs\" (UID: \"2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mhrgs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.960462 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw8tl\" (UniqueName: \"kubernetes.io/projected/e105e7e0-0046-47ca-8a73-c27e385a0301-kube-api-access-sw8tl\") pod \"marketplace-operator-79b997595-tp8zp\" (UID: \"e105e7e0-0046-47ca-8a73-c27e385a0301\") " pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.960487 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/387127d5-a12c-4dee-96d9-47f8983e0356-apiservice-cert\") pod \"packageserver-d55dfcdfc-cltn2\" (UID: \"387127d5-a12c-4dee-96d9-47f8983e0356\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cltn2" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.961481 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a99991-5cb3-4242-b354-ee3908088b65-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-stj9j\" (UID: \"81a99991-5cb3-4242-b354-ee3908088b65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stj9j" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.961539 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/443ac64a-4fb6-4571-8c73-e6cbf9491dbc-node-bootstrap-token\") pod \"machine-config-server-5brsg\" (UID: \"443ac64a-4fb6-4571-8c73-e6cbf9491dbc\") " pod="openshift-machine-config-operator/machine-config-server-5brsg" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.961562 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jw6t\" (UniqueName: \"kubernetes.io/projected/324736a7-6465-419b-8d2a-0a8e678f5421-kube-api-access-8jw6t\") pod \"service-ca-9c57cc56f-t4p5v\" (UID: \"324736a7-6465-419b-8d2a-0a8e678f5421\") " pod="openshift-service-ca/service-ca-9c57cc56f-t4p5v" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.961586 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/275884c2-8599-4867-97aa-04d67ba35182-registry-certificates\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.961630 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff434ccf-df42-4dac-b43c-3f68265d2a7b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-v2srk\" (UID: \"ff434ccf-df42-4dac-b43c-3f68265d2a7b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v2srk" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.961652 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd8rg\" (UniqueName: \"kubernetes.io/projected/387127d5-a12c-4dee-96d9-47f8983e0356-kube-api-access-wd8rg\") pod \"packageserver-d55dfcdfc-cltn2\" (UID: \"387127d5-a12c-4dee-96d9-47f8983e0356\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cltn2" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.961703 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2wnv\" (UniqueName: \"kubernetes.io/projected/2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0-kube-api-access-z2wnv\") pod \"ingress-operator-5b745b69d9-mhrgs\" (UID: \"2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mhrgs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.961727 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcjxs\" (UniqueName: \"kubernetes.io/projected/279eff18-be0a-4f07-81fc-69fef8faac6c-kube-api-access-gcjxs\") pod \"collect-profiles-29532105-sr449\" (UID: \"279eff18-be0a-4f07-81fc-69fef8faac6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-sr449" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.961793 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95l97\" (UniqueName: \"kubernetes.io/projected/8694e8ab-019b-41c1-bf7c-49d3ca10f213-kube-api-access-95l97\") pod \"dns-operator-744455d44c-l7frg\" (UID: \"8694e8ab-019b-41c1-bf7c-49d3ca10f213\") " pod="openshift-dns-operator/dns-operator-744455d44c-l7frg" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.961821 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bdc04075-1afc-4a7b-8b56-72d3ad508da5-etcd-client\") pod \"etcd-operator-b45778765-8n9ts\" (UID: \"bdc04075-1afc-4a7b-8b56-72d3ad508da5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8n9ts" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.961872 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/68f685bc-87e6-44fb-97dd-798196ff677d-srv-cert\") pod \"catalog-operator-68c6474976-w47s8\" (UID: \"68f685bc-87e6-44fb-97dd-798196ff677d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-w47s8" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.961898 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/cf5157db-1776-4520-93cc-3af5f7c91511-service-ca-bundle\") pod \"authentication-operator-69f744f599-smsp7\" (UID: \"cf5157db-1776-4520-93cc-3af5f7c91511\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-smsp7" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.961945 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/387127d5-a12c-4dee-96d9-47f8983e0356-webhook-cert\") pod \"packageserver-d55dfcdfc-cltn2\" (UID: \"387127d5-a12c-4dee-96d9-47f8983e0356\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cltn2" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.961968 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31ac03e9-2f2e-4103-bcc8-f660c98d5a39-images\") pod \"machine-config-operator-74547568cd-p82hk\" (UID: \"31ac03e9-2f2e-4103-bcc8-f660c98d5a39\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p82hk" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.961992 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b320b4bf-81a0-4d15-a48b-2d24f6014162-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-8bwfr\" (UID: \"b320b4bf-81a0-4d15-a48b-2d24f6014162\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8bwfr" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.962032 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdc04075-1afc-4a7b-8b56-72d3ad508da5-config\") pod \"etcd-operator-b45778765-8n9ts\" (UID: \"bdc04075-1afc-4a7b-8b56-72d3ad508da5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8n9ts" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 
09:58:42.962057 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/62383901-634c-43e5-9177-4be83e12d514-socket-dir\") pod \"csi-hostpathplugin-zmmhb\" (UID: \"62383901-634c-43e5-9177-4be83e12d514\") " pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.962110 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/62383901-634c-43e5-9177-4be83e12d514-registration-dir\") pod \"csi-hostpathplugin-zmmhb\" (UID: \"62383901-634c-43e5-9177-4be83e12d514\") " pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.962140 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.962188 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4139cb8d-4ba9-4a77-8858-137229c972db-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kksxb\" (UID: \"4139cb8d-4ba9-4a77-8858-137229c972db\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kksxb" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.962214 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3abe3fcd-61fb-4f6e-b196-dfc60163c86b-proxy-tls\") pod \"machine-config-controller-84d6567774-bbm9l\" (UID: \"3abe3fcd-61fb-4f6e-b196-dfc60163c86b\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbm9l" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.962257 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bdc04075-1afc-4a7b-8b56-72d3ad508da5-etcd-ca\") pod \"etcd-operator-b45778765-8n9ts\" (UID: \"bdc04075-1afc-4a7b-8b56-72d3ad508da5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8n9ts" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.962286 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/62383901-634c-43e5-9177-4be83e12d514-plugins-dir\") pod \"csi-hostpathplugin-zmmhb\" (UID: \"62383901-634c-43e5-9177-4be83e12d514\") " pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.962332 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mhrgs\" (UID: \"2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mhrgs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.962364 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgtvf\" (UniqueName: \"kubernetes.io/projected/9fbb35dd-331f-43cb-8f92-f8a6f1e4ef1b-kube-api-access-pgtvf\") pod \"migrator-59844c95c7-wfrxf\" (UID: \"9fbb35dd-331f-43cb-8f92-f8a6f1e4ef1b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wfrxf" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.962390 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.962434 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9dhx\" (UniqueName: \"kubernetes.io/projected/b75345bf-b93f-471f-9b11-e5c5695e7e6a-kube-api-access-q9dhx\") pod \"router-default-5444994796-jb8zb\" (UID: \"b75345bf-b93f-471f-9b11-e5c5695e7e6a\") " pod="openshift-ingress/router-default-5444994796-jb8zb" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.962461 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjz2z\" (UniqueName: \"kubernetes.io/projected/e76542af-c648-4aac-8cc1-e07d238c6a2c-kube-api-access-tjz2z\") pod \"service-ca-operator-777779d784-62kx9\" (UID: \"e76542af-c648-4aac-8cc1-e07d238c6a2c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-62kx9" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.963742 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c15ecede-c840-4fc8-bc38-a970796c9517-oauth-serving-cert\") pod \"console-f9d7485db-fqscc\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") " pod="openshift-console/console-f9d7485db-fqscc" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.963781 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt499\" (UniqueName: \"kubernetes.io/projected/ee204f54-d95c-4d2d-8b56-2812a6843938-kube-api-access-gt499\") pod \"control-plane-machine-set-operator-78cbb6b69f-4n9rx\" (UID: \"ee204f54-d95c-4d2d-8b56-2812a6843938\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4n9rx" Feb 24 09:58:42 crc 
kubenswrapper[4755]: I0224 09:58:42.963861 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31ac03e9-2f2e-4103-bcc8-f660c98d5a39-auth-proxy-config\") pod \"machine-config-operator-74547568cd-p82hk\" (UID: \"31ac03e9-2f2e-4103-bcc8-f660c98d5a39\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p82hk" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.963909 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3abe3fcd-61fb-4f6e-b196-dfc60163c86b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bbm9l\" (UID: \"3abe3fcd-61fb-4f6e-b196-dfc60163c86b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbm9l" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.963944 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f70c68fa-4429-40ac-9d4a-72f5b5086c94-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tft86\" (UID: \"f70c68fa-4429-40ac-9d4a-72f5b5086c94\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tft86" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.963987 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b75345bf-b93f-471f-9b11-e5c5695e7e6a-metrics-certs\") pod \"router-default-5444994796-jb8zb\" (UID: \"b75345bf-b93f-471f-9b11-e5c5695e7e6a\") " pod="openshift-ingress/router-default-5444994796-jb8zb" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.964014 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rmjk\" (UniqueName: 
\"kubernetes.io/projected/bdc04075-1afc-4a7b-8b56-72d3ad508da5-kube-api-access-5rmjk\") pod \"etcd-operator-b45778765-8n9ts\" (UID: \"bdc04075-1afc-4a7b-8b56-72d3ad508da5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8n9ts" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.964042 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b75345bf-b93f-471f-9b11-e5c5695e7e6a-service-ca-bundle\") pod \"router-default-5444994796-jb8zb\" (UID: \"b75345bf-b93f-471f-9b11-e5c5695e7e6a\") " pod="openshift-ingress/router-default-5444994796-jb8zb" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.964132 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5cb39b3-d948-4b7d-88b4-1c4962e0c35a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9kw78\" (UID: \"e5cb39b3-d948-4b7d-88b4-1c4962e0c35a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kw78" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.964165 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e105e7e0-0046-47ca-8a73-c27e385a0301-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tp8zp\" (UID: \"e105e7e0-0046-47ca-8a73-c27e385a0301\") " pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.964208 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/62383901-634c-43e5-9177-4be83e12d514-csi-data-dir\") pod \"csi-hostpathplugin-zmmhb\" (UID: \"62383901-634c-43e5-9177-4be83e12d514\") " pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" Feb 24 09:58:42 crc kubenswrapper[4755]: 
I0224 09:58:42.964236 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.964287 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf5157db-1776-4520-93cc-3af5f7c91511-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-smsp7\" (UID: \"cf5157db-1776-4520-93cc-3af5f7c91511\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-smsp7" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.964316 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jdxh\" (UniqueName: \"kubernetes.io/projected/d1c02e5f-aebc-4edb-9e99-f26fc84a32ab-kube-api-access-4jdxh\") pod \"downloads-7954f5f757-vhlbl\" (UID: \"d1c02e5f-aebc-4edb-9e99-f26fc84a32ab\") " pod="openshift-console/downloads-7954f5f757-vhlbl" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.964363 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.964404 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d8a1f03-1c60-409f-85c0-5d2dad7e624c-metrics-tls\") pod \"dns-default-lj8d5\" (UID: 
\"0d8a1f03-1c60-409f-85c0-5d2dad7e624c\") " pod="openshift-dns/dns-default-lj8d5" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.964455 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-246nf\" (UniqueName: \"kubernetes.io/projected/62383901-634c-43e5-9177-4be83e12d514-kube-api-access-246nf\") pod \"csi-hostpathplugin-zmmhb\" (UID: \"62383901-634c-43e5-9177-4be83e12d514\") " pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.964480 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4f3eb31-20c7-4549-9df4-979045249f81-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7qxdc\" (UID: \"a4f3eb31-20c7-4549-9df4-979045249f81\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7qxdc" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.964800 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e76542af-c648-4aac-8cc1-e07d238c6a2c-config\") pod \"service-ca-operator-777779d784-62kx9\" (UID: \"e76542af-c648-4aac-8cc1-e07d238c6a2c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-62kx9" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.964825 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c15ecede-c840-4fc8-bc38-a970796c9517-trusted-ca-bundle\") pod \"console-f9d7485db-fqscc\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") " pod="openshift-console/console-f9d7485db-fqscc" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.964869 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31ac03e9-2f2e-4103-bcc8-f660c98d5a39-proxy-tls\") pod \"machine-config-operator-74547568cd-p82hk\" (UID: \"31ac03e9-2f2e-4103-bcc8-f660c98d5a39\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p82hk" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.964893 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hdwg\" (UniqueName: \"kubernetes.io/projected/b320b4bf-81a0-4d15-a48b-2d24f6014162-kube-api-access-5hdwg\") pod \"openshift-controller-manager-operator-756b6f6bc6-8bwfr\" (UID: \"b320b4bf-81a0-4d15-a48b-2d24f6014162\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8bwfr" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.964916 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/324736a7-6465-419b-8d2a-0a8e678f5421-signing-key\") pod \"service-ca-9c57cc56f-t4p5v\" (UID: \"324736a7-6465-419b-8d2a-0a8e678f5421\") " pod="openshift-service-ca/service-ca-9c57cc56f-t4p5v" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.964979 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.965019 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5cb39b3-d948-4b7d-88b4-1c4962e0c35a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9kw78\" (UID: \"e5cb39b3-d948-4b7d-88b4-1c4962e0c35a\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kw78" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.965045 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2317257d-494c-48b7-a69c-013e8b1d7d81-audit-policies\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.965094 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c15ecede-c840-4fc8-bc38-a970796c9517-console-config\") pod \"console-f9d7485db-fqscc\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") " pod="openshift-console/console-f9d7485db-fqscc" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.965122 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/275884c2-8599-4867-97aa-04d67ba35182-registry-tls\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.965171 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hdpw\" (UniqueName: \"kubernetes.io/projected/9ab5d7c4-9b5a-4919-9452-0f906a526bef-kube-api-access-4hdpw\") pod \"olm-operator-6b444d44fb-7dprd\" (UID: \"9ab5d7c4-9b5a-4919-9452-0f906a526bef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dprd" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.965197 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/275884c2-8599-4867-97aa-04d67ba35182-trusted-ca\") pod 
\"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.965222 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c15ecede-c840-4fc8-bc38-a970796c9517-service-ca\") pod \"console-f9d7485db-fqscc\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") " pod="openshift-console/console-f9d7485db-fqscc" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.965270 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/62383901-634c-43e5-9177-4be83e12d514-mountpoint-dir\") pod \"csi-hostpathplugin-zmmhb\" (UID: \"62383901-634c-43e5-9177-4be83e12d514\") " pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.965299 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2317257d-494c-48b7-a69c-013e8b1d7d81-audit-dir\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.965345 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.966999 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cf5157db-1776-4520-93cc-3af5f7c91511-config\") pod \"authentication-operator-69f744f599-smsp7\" (UID: \"cf5157db-1776-4520-93cc-3af5f7c91511\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-smsp7" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.967684 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff434ccf-df42-4dac-b43c-3f68265d2a7b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-v2srk\" (UID: \"ff434ccf-df42-4dac-b43c-3f68265d2a7b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v2srk" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.970344 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t522d"] Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.970364 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b75345bf-b93f-471f-9b11-e5c5695e7e6a-default-certificate\") pod \"router-default-5444994796-jb8zb\" (UID: \"b75345bf-b93f-471f-9b11-e5c5695e7e6a\") " pod="openshift-ingress/router-default-5444994796-jb8zb" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.970981 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28ea82ea-bfeb-4a67-bac4-a97156c7995b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-m78zl\" (UID: \"28ea82ea-bfeb-4a67-bac4-a97156c7995b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m78zl" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.971473 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5cb39b3-d948-4b7d-88b4-1c4962e0c35a-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-9kw78\" (UID: \"e5cb39b3-d948-4b7d-88b4-1c4962e0c35a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kw78" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.971537 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/279eff18-be0a-4f07-81fc-69fef8faac6c-config-volume\") pod \"collect-profiles-29532105-sr449\" (UID: \"279eff18-be0a-4f07-81fc-69fef8faac6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-sr449" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.971573 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/443ac64a-4fb6-4571-8c73-e6cbf9491dbc-certs\") pod \"machine-config-server-5brsg\" (UID: \"443ac64a-4fb6-4571-8c73-e6cbf9491dbc\") " pod="openshift-machine-config-operator/machine-config-server-5brsg" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.971601 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r8gl\" (UniqueName: \"kubernetes.io/projected/443ac64a-4fb6-4571-8c73-e6cbf9491dbc-kube-api-access-8r8gl\") pod \"machine-config-server-5brsg\" (UID: \"443ac64a-4fb6-4571-8c73-e6cbf9491dbc\") " pod="openshift-machine-config-operator/machine-config-server-5brsg" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.971632 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bdc04075-1afc-4a7b-8b56-72d3ad508da5-etcd-service-ca\") pod \"etcd-operator-b45778765-8n9ts\" (UID: \"bdc04075-1afc-4a7b-8b56-72d3ad508da5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8n9ts" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.971660 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f3eb31-20c7-4549-9df4-979045249f81-config\") pod \"kube-controller-manager-operator-78b949d7b-7qxdc\" (UID: \"a4f3eb31-20c7-4549-9df4-979045249f81\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7qxdc" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.971693 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/275884c2-8599-4867-97aa-04d67ba35182-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.971719 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/387127d5-a12c-4dee-96d9-47f8983e0356-tmpfs\") pod \"packageserver-d55dfcdfc-cltn2\" (UID: \"387127d5-a12c-4dee-96d9-47f8983e0356\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cltn2" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.971763 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ab5d7c4-9b5a-4919-9452-0f906a526bef-srv-cert\") pod \"olm-operator-6b444d44fb-7dprd\" (UID: \"9ab5d7c4-9b5a-4919-9452-0f906a526bef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dprd" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.972153 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0-trusted-ca\") pod \"ingress-operator-5b745b69d9-mhrgs\" (UID: \"2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mhrgs" Feb 24 
09:58:42 crc kubenswrapper[4755]: E0224 09:58:42.972270 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:43.472257802 +0000 UTC m=+227.928780345 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.973708 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b320b4bf-81a0-4d15-a48b-2d24f6014162-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-8bwfr\" (UID: \"b320b4bf-81a0-4d15-a48b-2d24f6014162\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8bwfr" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.974121 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff434ccf-df42-4dac-b43c-3f68265d2a7b-config\") pod \"kube-apiserver-operator-766d6c64bb-v2srk\" (UID: \"ff434ccf-df42-4dac-b43c-3f68265d2a7b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v2srk" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.974437 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c15ecede-c840-4fc8-bc38-a970796c9517-oauth-serving-cert\") pod \"console-f9d7485db-fqscc\" (UID: 
\"c15ecede-c840-4fc8-bc38-a970796c9517\") " pod="openshift-console/console-f9d7485db-fqscc" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.974697 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/275884c2-8599-4867-97aa-04d67ba35182-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.975432 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/387127d5-a12c-4dee-96d9-47f8983e0356-tmpfs\") pod \"packageserver-d55dfcdfc-cltn2\" (UID: \"387127d5-a12c-4dee-96d9-47f8983e0356\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cltn2" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.975607 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3abe3fcd-61fb-4f6e-b196-dfc60163c86b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bbm9l\" (UID: \"3abe3fcd-61fb-4f6e-b196-dfc60163c86b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbm9l" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.975639 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31ac03e9-2f2e-4103-bcc8-f660c98d5a39-images\") pod \"machine-config-operator-74547568cd-p82hk\" (UID: \"31ac03e9-2f2e-4103-bcc8-f660c98d5a39\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p82hk" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.976127 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/cf5157db-1776-4520-93cc-3af5f7c91511-service-ca-bundle\") pod \"authentication-operator-69f744f599-smsp7\" (UID: \"cf5157db-1776-4520-93cc-3af5f7c91511\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-smsp7" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.976161 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.977583 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bdc04075-1afc-4a7b-8b56-72d3ad508da5-etcd-service-ca\") pod \"etcd-operator-b45778765-8n9ts\" (UID: \"bdc04075-1afc-4a7b-8b56-72d3ad508da5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8n9ts" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.977706 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31ac03e9-2f2e-4103-bcc8-f660c98d5a39-auth-proxy-config\") pod \"machine-config-operator-74547568cd-p82hk\" (UID: \"31ac03e9-2f2e-4103-bcc8-f660c98d5a39\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p82hk" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.977900 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdc04075-1afc-4a7b-8b56-72d3ad508da5-serving-cert\") pod \"etcd-operator-b45778765-8n9ts\" (UID: \"bdc04075-1afc-4a7b-8b56-72d3ad508da5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8n9ts" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.979101 4755 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.979452 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c15ecede-c840-4fc8-bc38-a970796c9517-console-serving-cert\") pod \"console-f9d7485db-fqscc\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") " pod="openshift-console/console-f9d7485db-fqscc" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.979511 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.979907 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b75345bf-b93f-471f-9b11-e5c5695e7e6a-service-ca-bundle\") pod \"router-default-5444994796-jb8zb\" (UID: \"b75345bf-b93f-471f-9b11-e5c5695e7e6a\") " pod="openshift-ingress/router-default-5444994796-jb8zb" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.980447 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5cb39b3-d948-4b7d-88b4-1c4962e0c35a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9kw78\" (UID: \"e5cb39b3-d948-4b7d-88b4-1c4962e0c35a\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kw78" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.980653 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bdc04075-1afc-4a7b-8b56-72d3ad508da5-etcd-ca\") pod \"etcd-operator-b45778765-8n9ts\" (UID: \"bdc04075-1afc-4a7b-8b56-72d3ad508da5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8n9ts" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.981562 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.982135 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/275884c2-8599-4867-97aa-04d67ba35182-registry-certificates\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.982150 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf5157db-1776-4520-93cc-3af5f7c91511-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-smsp7\" (UID: \"cf5157db-1776-4520-93cc-3af5f7c91511\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-smsp7" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.982256 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/4139cb8d-4ba9-4a77-8858-137229c972db-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kksxb\" (UID: \"4139cb8d-4ba9-4a77-8858-137229c972db\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kksxb" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.983006 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2317257d-494c-48b7-a69c-013e8b1d7d81-audit-dir\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.983582 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/275884c2-8599-4867-97aa-04d67ba35182-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.983965 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c15ecede-c840-4fc8-bc38-a970796c9517-service-ca\") pod \"console-f9d7485db-fqscc\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") " pod="openshift-console/console-f9d7485db-fqscc" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.984622 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c15ecede-c840-4fc8-bc38-a970796c9517-trusted-ca-bundle\") pod \"console-f9d7485db-fqscc\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") " pod="openshift-console/console-f9d7485db-fqscc" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.984653 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/8694e8ab-019b-41c1-bf7c-49d3ca10f213-metrics-tls\") pod \"dns-operator-744455d44c-l7frg\" (UID: \"8694e8ab-019b-41c1-bf7c-49d3ca10f213\") " pod="openshift-dns-operator/dns-operator-744455d44c-l7frg" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.985035 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2317257d-494c-48b7-a69c-013e8b1d7d81-audit-policies\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.975612 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdc04075-1afc-4a7b-8b56-72d3ad508da5-config\") pod \"etcd-operator-b45778765-8n9ts\" (UID: \"bdc04075-1afc-4a7b-8b56-72d3ad508da5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8n9ts" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.986153 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c15ecede-c840-4fc8-bc38-a970796c9517-console-config\") pod \"console-f9d7485db-fqscc\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") " pod="openshift-console/console-f9d7485db-fqscc" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.987385 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf5157db-1776-4520-93cc-3af5f7c91511-serving-cert\") pod \"authentication-operator-69f744f599-smsp7\" (UID: \"cf5157db-1776-4520-93cc-3af5f7c91511\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-smsp7" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.990309 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/b75345bf-b93f-471f-9b11-e5c5695e7e6a-metrics-certs\") pod \"router-default-5444994796-jb8zb\" (UID: \"b75345bf-b93f-471f-9b11-e5c5695e7e6a\") " pod="openshift-ingress/router-default-5444994796-jb8zb" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.990444 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/275884c2-8599-4867-97aa-04d67ba35182-registry-tls\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.991403 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.991511 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3abe3fcd-61fb-4f6e-b196-dfc60163c86b-proxy-tls\") pod \"machine-config-controller-84d6567774-bbm9l\" (UID: \"3abe3fcd-61fb-4f6e-b196-dfc60163c86b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbm9l" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.991828 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31ac03e9-2f2e-4103-bcc8-f660c98d5a39-proxy-tls\") pod \"machine-config-operator-74547568cd-p82hk\" (UID: \"31ac03e9-2f2e-4103-bcc8-f660c98d5a39\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p82hk" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.991960 4755 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f70c68fa-4429-40ac-9d4a-72f5b5086c94-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tft86\" (UID: \"f70c68fa-4429-40ac-9d4a-72f5b5086c94\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tft86" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.992094 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0-metrics-tls\") pod \"ingress-operator-5b745b69d9-mhrgs\" (UID: \"2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mhrgs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.993028 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c15ecede-c840-4fc8-bc38-a970796c9517-console-oauth-config\") pod \"console-f9d7485db-fqscc\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") " pod="openshift-console/console-f9d7485db-fqscc" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.993097 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee204f54-d95c-4d2d-8b56-2812a6843938-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4n9rx\" (UID: \"ee204f54-d95c-4d2d-8b56-2812a6843938\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4n9rx" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.993204 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9ab5d7c4-9b5a-4919-9452-0f906a526bef-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7dprd\" (UID: \"9ab5d7c4-9b5a-4919-9452-0f906a526bef\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dprd" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.993674 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:42 crc kubenswrapper[4755]: I0224 09:58:42.996115 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/275884c2-8599-4867-97aa-04d67ba35182-trusted-ca\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.002597 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5cb39b3-d948-4b7d-88b4-1c4962e0c35a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9kw78\" (UID: \"e5cb39b3-d948-4b7d-88b4-1c4962e0c35a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kw78" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.002686 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.002703 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.002760 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bdc04075-1afc-4a7b-8b56-72d3ad508da5-etcd-client\") pod \"etcd-operator-b45778765-8n9ts\" (UID: \"bdc04075-1afc-4a7b-8b56-72d3ad508da5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8n9ts" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.003004 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/387127d5-a12c-4dee-96d9-47f8983e0356-apiservice-cert\") pod \"packageserver-d55dfcdfc-cltn2\" (UID: \"387127d5-a12c-4dee-96d9-47f8983e0356\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cltn2" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.003176 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.003407 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.003589 
4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b320b4bf-81a0-4d15-a48b-2d24f6014162-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-8bwfr\" (UID: \"b320b4bf-81a0-4d15-a48b-2d24f6014162\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8bwfr" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.003645 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ab5d7c4-9b5a-4919-9452-0f906a526bef-srv-cert\") pod \"olm-operator-6b444d44fb-7dprd\" (UID: \"9ab5d7c4-9b5a-4919-9452-0f906a526bef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dprd" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.003728 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b75345bf-b93f-471f-9b11-e5c5695e7e6a-stats-auth\") pod \"router-default-5444994796-jb8zb\" (UID: \"b75345bf-b93f-471f-9b11-e5c5695e7e6a\") " pod="openshift-ingress/router-default-5444994796-jb8zb" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.004094 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/387127d5-a12c-4dee-96d9-47f8983e0356-webhook-cert\") pod \"packageserver-d55dfcdfc-cltn2\" (UID: \"387127d5-a12c-4dee-96d9-47f8983e0356\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cltn2" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.005412 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.006219 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f22xn\" (UniqueName: \"kubernetes.io/projected/31ac03e9-2f2e-4103-bcc8-f660c98d5a39-kube-api-access-f22xn\") pod \"machine-config-operator-74547568cd-p82hk\" (UID: \"31ac03e9-2f2e-4103-bcc8-f660c98d5a39\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p82hk" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.006356 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/28ea82ea-bfeb-4a67-bac4-a97156c7995b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-m78zl\" (UID: \"28ea82ea-bfeb-4a67-bac4-a97156c7995b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m78zl" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.020751 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t4zj\" (UniqueName: \"kubernetes.io/projected/275884c2-8599-4867-97aa-04d67ba35182-kube-api-access-4t4zj\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.025977 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xpvsw"] Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.038246 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/275884c2-8599-4867-97aa-04d67ba35182-bound-sa-token\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 
09:58:43.059422 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjdpw\" (UniqueName: \"kubernetes.io/projected/c15ecede-c840-4fc8-bc38-a970796c9517-kube-api-access-kjdpw\") pod \"console-f9d7485db-fqscc\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") " pod="openshift-console/console-f9d7485db-fqscc" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.064787 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wrwg8"] Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.072629 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:43 crc kubenswrapper[4755]: E0224 09:58:43.072770 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:43.572745566 +0000 UTC m=+228.029268119 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.072842 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/62383901-634c-43e5-9177-4be83e12d514-mountpoint-dir\") pod \"csi-hostpathplugin-zmmhb\" (UID: \"62383901-634c-43e5-9177-4be83e12d514\") " pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.072868 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/279eff18-be0a-4f07-81fc-69fef8faac6c-config-volume\") pod \"collect-profiles-29532105-sr449\" (UID: \"279eff18-be0a-4f07-81fc-69fef8faac6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-sr449" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.072891 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/443ac64a-4fb6-4571-8c73-e6cbf9491dbc-certs\") pod \"machine-config-server-5brsg\" (UID: \"443ac64a-4fb6-4571-8c73-e6cbf9491dbc\") " pod="openshift-machine-config-operator/machine-config-server-5brsg" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.073358 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r8gl\" (UniqueName: \"kubernetes.io/projected/443ac64a-4fb6-4571-8c73-e6cbf9491dbc-kube-api-access-8r8gl\") pod \"machine-config-server-5brsg\" (UID: 
\"443ac64a-4fb6-4571-8c73-e6cbf9491dbc\") " pod="openshift-machine-config-operator/machine-config-server-5brsg" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.073379 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f3eb31-20c7-4549-9df4-979045249f81-config\") pod \"kube-controller-manager-operator-78b949d7b-7qxdc\" (UID: \"a4f3eb31-20c7-4549-9df4-979045249f81\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7qxdc" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.072958 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/62383901-634c-43e5-9177-4be83e12d514-mountpoint-dir\") pod \"csi-hostpathplugin-zmmhb\" (UID: \"62383901-634c-43e5-9177-4be83e12d514\") " pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.073426 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4f3eb31-20c7-4549-9df4-979045249f81-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7qxdc\" (UID: \"a4f3eb31-20c7-4549-9df4-979045249f81\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7qxdc" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.073489 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d8a1f03-1c60-409f-85c0-5d2dad7e624c-config-volume\") pod \"dns-default-lj8d5\" (UID: \"0d8a1f03-1c60-409f-85c0-5d2dad7e624c\") " pod="openshift-dns/dns-default-lj8d5" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.073516 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mdfn\" (UniqueName: 
\"kubernetes.io/projected/0d8a1f03-1c60-409f-85c0-5d2dad7e624c-kube-api-access-2mdfn\") pod \"dns-default-lj8d5\" (UID: \"0d8a1f03-1c60-409f-85c0-5d2dad7e624c\") " pod="openshift-dns/dns-default-lj8d5" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.073541 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/279eff18-be0a-4f07-81fc-69fef8faac6c-secret-volume\") pod \"collect-profiles-29532105-sr449\" (UID: \"279eff18-be0a-4f07-81fc-69fef8faac6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-sr449" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.073566 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4fnp\" (UniqueName: \"kubernetes.io/projected/68f685bc-87e6-44fb-97dd-798196ff677d-kube-api-access-s4fnp\") pod \"catalog-operator-68c6474976-w47s8\" (UID: \"68f685bc-87e6-44fb-97dd-798196ff677d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-w47s8" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.073589 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a99991-5cb3-4242-b354-ee3908088b65-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-stj9j\" (UID: \"81a99991-5cb3-4242-b354-ee3908088b65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stj9j" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.073622 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:43 crc 
kubenswrapper[4755]: I0224 09:58:43.073659 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d2406ef-c2fd-495a-82fd-00ef8257e99e-cert\") pod \"ingress-canary-dgmbb\" (UID: \"1d2406ef-c2fd-495a-82fd-00ef8257e99e\") " pod="openshift-ingress-canary/ingress-canary-dgmbb" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.073694 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e76542af-c648-4aac-8cc1-e07d238c6a2c-serving-cert\") pod \"service-ca-operator-777779d784-62kx9\" (UID: \"e76542af-c648-4aac-8cc1-e07d238c6a2c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-62kx9" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.073718 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/324736a7-6465-419b-8d2a-0a8e678f5421-signing-cabundle\") pod \"service-ca-9c57cc56f-t4p5v\" (UID: \"324736a7-6465-419b-8d2a-0a8e678f5421\") " pod="openshift-service-ca/service-ca-9c57cc56f-t4p5v" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.073747 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/279eff18-be0a-4f07-81fc-69fef8faac6c-config-volume\") pod \"collect-profiles-29532105-sr449\" (UID: \"279eff18-be0a-4f07-81fc-69fef8faac6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-sr449" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.073753 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k64dq\" (UniqueName: \"kubernetes.io/projected/81a99991-5cb3-4242-b354-ee3908088b65-kube-api-access-k64dq\") pod \"kube-storage-version-migrator-operator-b67b599dd-stj9j\" (UID: \"81a99991-5cb3-4242-b354-ee3908088b65\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stj9j" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.073836 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e105e7e0-0046-47ca-8a73-c27e385a0301-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tp8zp\" (UID: \"e105e7e0-0046-47ca-8a73-c27e385a0301\") " pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.073880 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vthnr\" (UniqueName: \"kubernetes.io/projected/1d2406ef-c2fd-495a-82fd-00ef8257e99e-kube-api-access-vthnr\") pod \"ingress-canary-dgmbb\" (UID: \"1d2406ef-c2fd-495a-82fd-00ef8257e99e\") " pod="openshift-ingress-canary/ingress-canary-dgmbb" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.073946 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/68f685bc-87e6-44fb-97dd-798196ff677d-profile-collector-cert\") pod \"catalog-operator-68c6474976-w47s8\" (UID: \"68f685bc-87e6-44fb-97dd-798196ff677d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-w47s8" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.073988 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw8tl\" (UniqueName: \"kubernetes.io/projected/e105e7e0-0046-47ca-8a73-c27e385a0301-kube-api-access-sw8tl\") pod \"marketplace-operator-79b997595-tp8zp\" (UID: \"e105e7e0-0046-47ca-8a73-c27e385a0301\") " pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.074018 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/81a99991-5cb3-4242-b354-ee3908088b65-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-stj9j\" (UID: \"81a99991-5cb3-4242-b354-ee3908088b65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stj9j" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.074041 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/443ac64a-4fb6-4571-8c73-e6cbf9491dbc-node-bootstrap-token\") pod \"machine-config-server-5brsg\" (UID: \"443ac64a-4fb6-4571-8c73-e6cbf9491dbc\") " pod="openshift-machine-config-operator/machine-config-server-5brsg" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.074088 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jw6t\" (UniqueName: \"kubernetes.io/projected/324736a7-6465-419b-8d2a-0a8e678f5421-kube-api-access-8jw6t\") pod \"service-ca-9c57cc56f-t4p5v\" (UID: \"324736a7-6465-419b-8d2a-0a8e678f5421\") " pod="openshift-service-ca/service-ca-9c57cc56f-t4p5v" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.074143 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcjxs\" (UniqueName: \"kubernetes.io/projected/279eff18-be0a-4f07-81fc-69fef8faac6c-kube-api-access-gcjxs\") pod \"collect-profiles-29532105-sr449\" (UID: \"279eff18-be0a-4f07-81fc-69fef8faac6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-sr449" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.074196 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/68f685bc-87e6-44fb-97dd-798196ff677d-srv-cert\") pod \"catalog-operator-68c6474976-w47s8\" (UID: \"68f685bc-87e6-44fb-97dd-798196ff677d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-w47s8" Feb 24 09:58:43 crc 
kubenswrapper[4755]: I0224 09:58:43.074229 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/62383901-634c-43e5-9177-4be83e12d514-socket-dir\") pod \"csi-hostpathplugin-zmmhb\" (UID: \"62383901-634c-43e5-9177-4be83e12d514\") " pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.074254 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/62383901-634c-43e5-9177-4be83e12d514-registration-dir\") pod \"csi-hostpathplugin-zmmhb\" (UID: \"62383901-634c-43e5-9177-4be83e12d514\") " pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.074286 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/62383901-634c-43e5-9177-4be83e12d514-plugins-dir\") pod \"csi-hostpathplugin-zmmhb\" (UID: \"62383901-634c-43e5-9177-4be83e12d514\") " pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.074340 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjz2z\" (UniqueName: \"kubernetes.io/projected/e76542af-c648-4aac-8cc1-e07d238c6a2c-kube-api-access-tjz2z\") pod \"service-ca-operator-777779d784-62kx9\" (UID: \"e76542af-c648-4aac-8cc1-e07d238c6a2c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-62kx9" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.074408 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e105e7e0-0046-47ca-8a73-c27e385a0301-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tp8zp\" (UID: \"e105e7e0-0046-47ca-8a73-c27e385a0301\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.074439 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/62383901-634c-43e5-9177-4be83e12d514-csi-data-dir\") pod \"csi-hostpathplugin-zmmhb\" (UID: \"62383901-634c-43e5-9177-4be83e12d514\") " pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.074493 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d8a1f03-1c60-409f-85c0-5d2dad7e624c-config-volume\") pod \"dns-default-lj8d5\" (UID: \"0d8a1f03-1c60-409f-85c0-5d2dad7e624c\") " pod="openshift-dns/dns-default-lj8d5" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.074499 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d8a1f03-1c60-409f-85c0-5d2dad7e624c-metrics-tls\") pod \"dns-default-lj8d5\" (UID: \"0d8a1f03-1c60-409f-85c0-5d2dad7e624c\") " pod="openshift-dns/dns-default-lj8d5" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.074552 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-246nf\" (UniqueName: \"kubernetes.io/projected/62383901-634c-43e5-9177-4be83e12d514-kube-api-access-246nf\") pod \"csi-hostpathplugin-zmmhb\" (UID: \"62383901-634c-43e5-9177-4be83e12d514\") " pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.074579 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4f3eb31-20c7-4549-9df4-979045249f81-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7qxdc\" (UID: \"a4f3eb31-20c7-4549-9df4-979045249f81\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7qxdc" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.074603 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e76542af-c648-4aac-8cc1-e07d238c6a2c-config\") pod \"service-ca-operator-777779d784-62kx9\" (UID: \"e76542af-c648-4aac-8cc1-e07d238c6a2c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-62kx9" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.074644 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/324736a7-6465-419b-8d2a-0a8e678f5421-signing-key\") pod \"service-ca-9c57cc56f-t4p5v\" (UID: \"324736a7-6465-419b-8d2a-0a8e678f5421\") " pod="openshift-service-ca/service-ca-9c57cc56f-t4p5v" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.075351 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/62383901-634c-43e5-9177-4be83e12d514-plugins-dir\") pod \"csi-hostpathplugin-zmmhb\" (UID: \"62383901-634c-43e5-9177-4be83e12d514\") " pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.075826 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/443ac64a-4fb6-4571-8c73-e6cbf9491dbc-certs\") pod \"machine-config-server-5brsg\" (UID: \"443ac64a-4fb6-4571-8c73-e6cbf9491dbc\") " pod="openshift-machine-config-operator/machine-config-server-5brsg" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.075984 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/62383901-634c-43e5-9177-4be83e12d514-socket-dir\") pod \"csi-hostpathplugin-zmmhb\" (UID: \"62383901-634c-43e5-9177-4be83e12d514\") " 
pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.076154 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e105e7e0-0046-47ca-8a73-c27e385a0301-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tp8zp\" (UID: \"e105e7e0-0046-47ca-8a73-c27e385a0301\") " pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.076632 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4f3eb31-20c7-4549-9df4-979045249f81-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7qxdc\" (UID: \"a4f3eb31-20c7-4549-9df4-979045249f81\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7qxdc" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.077151 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f3eb31-20c7-4549-9df4-979045249f81-config\") pod \"kube-controller-manager-operator-78b949d7b-7qxdc\" (UID: \"a4f3eb31-20c7-4549-9df4-979045249f81\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7qxdc" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.077242 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/62383901-634c-43e5-9177-4be83e12d514-csi-data-dir\") pod \"csi-hostpathplugin-zmmhb\" (UID: \"62383901-634c-43e5-9177-4be83e12d514\") " pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.077296 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/62383901-634c-43e5-9177-4be83e12d514-registration-dir\") pod 
\"csi-hostpathplugin-zmmhb\" (UID: \"62383901-634c-43e5-9177-4be83e12d514\") " pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" Feb 24 09:58:43 crc kubenswrapper[4755]: E0224 09:58:43.077748 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:43.577734794 +0000 UTC m=+228.034257337 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.078814 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/324736a7-6465-419b-8d2a-0a8e678f5421-signing-key\") pod \"service-ca-9c57cc56f-t4p5v\" (UID: \"324736a7-6465-419b-8d2a-0a8e678f5421\") " pod="openshift-service-ca/service-ca-9c57cc56f-t4p5v" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.079308 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/68f685bc-87e6-44fb-97dd-798196ff677d-profile-collector-cert\") pod \"catalog-operator-68c6474976-w47s8\" (UID: \"68f685bc-87e6-44fb-97dd-798196ff677d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-w47s8" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.079491 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d8a1f03-1c60-409f-85c0-5d2dad7e624c-metrics-tls\") 
pod \"dns-default-lj8d5\" (UID: \"0d8a1f03-1c60-409f-85c0-5d2dad7e624c\") " pod="openshift-dns/dns-default-lj8d5" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.080578 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/68f685bc-87e6-44fb-97dd-798196ff677d-srv-cert\") pod \"catalog-operator-68c6474976-w47s8\" (UID: \"68f685bc-87e6-44fb-97dd-798196ff677d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-w47s8" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.081182 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a99991-5cb3-4242-b354-ee3908088b65-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-stj9j\" (UID: \"81a99991-5cb3-4242-b354-ee3908088b65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stj9j" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.081899 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e76542af-c648-4aac-8cc1-e07d238c6a2c-config\") pod \"service-ca-operator-777779d784-62kx9\" (UID: \"e76542af-c648-4aac-8cc1-e07d238c6a2c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-62kx9" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.081706 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/324736a7-6465-419b-8d2a-0a8e678f5421-signing-cabundle\") pod \"service-ca-9c57cc56f-t4p5v\" (UID: \"324736a7-6465-419b-8d2a-0a8e678f5421\") " pod="openshift-service-ca/service-ca-9c57cc56f-t4p5v" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.083272 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25dqb\" (UniqueName: 
\"kubernetes.io/projected/3abe3fcd-61fb-4f6e-b196-dfc60163c86b-kube-api-access-25dqb\") pod \"machine-config-controller-84d6567774-bbm9l\" (UID: \"3abe3fcd-61fb-4f6e-b196-dfc60163c86b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbm9l" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.083761 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/443ac64a-4fb6-4571-8c73-e6cbf9491dbc-node-bootstrap-token\") pod \"machine-config-server-5brsg\" (UID: \"443ac64a-4fb6-4571-8c73-e6cbf9491dbc\") " pod="openshift-machine-config-operator/machine-config-server-5brsg" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.083842 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e76542af-c648-4aac-8cc1-e07d238c6a2c-serving-cert\") pod \"service-ca-operator-777779d784-62kx9\" (UID: \"e76542af-c648-4aac-8cc1-e07d238c6a2c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-62kx9" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.085326 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/279eff18-be0a-4f07-81fc-69fef8faac6c-secret-volume\") pod \"collect-profiles-29532105-sr449\" (UID: \"279eff18-be0a-4f07-81fc-69fef8faac6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-sr449" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.085864 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e105e7e0-0046-47ca-8a73-c27e385a0301-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tp8zp\" (UID: \"e105e7e0-0046-47ca-8a73-c27e385a0301\") " pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.089846 
4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a99991-5cb3-4242-b354-ee3908088b65-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-stj9j\" (UID: \"81a99991-5cb3-4242-b354-ee3908088b65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stj9j" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.095478 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sprnm\" (UniqueName: \"kubernetes.io/projected/28ea82ea-bfeb-4a67-bac4-a97156c7995b-kube-api-access-sprnm\") pod \"cluster-image-registry-operator-dc59b4c8b-m78zl\" (UID: \"28ea82ea-bfeb-4a67-bac4-a97156c7995b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m78zl" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.095690 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d2406ef-c2fd-495a-82fd-00ef8257e99e-cert\") pod \"ingress-canary-dgmbb\" (UID: \"1d2406ef-c2fd-495a-82fd-00ef8257e99e\") " pod="openshift-ingress-canary/ingress-canary-dgmbb" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.119540 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4xml\" (UniqueName: \"kubernetes.io/projected/cf5157db-1776-4520-93cc-3af5f7c91511-kube-api-access-s4xml\") pod \"authentication-operator-69f744f599-smsp7\" (UID: \"cf5157db-1776-4520-93cc-3af5f7c91511\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-smsp7" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.134290 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89zfg\" (UniqueName: \"kubernetes.io/projected/2317257d-494c-48b7-a69c-013e8b1d7d81-kube-api-access-89zfg\") pod \"oauth-openshift-558db77b4-srwxs\" (UID: 
\"2317257d-494c-48b7-a69c-013e8b1d7d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.155730 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv2sd\" (UniqueName: \"kubernetes.io/projected/4139cb8d-4ba9-4a77-8858-137229c972db-kube-api-access-bv2sd\") pod \"multus-admission-controller-857f4d67dd-kksxb\" (UID: \"4139cb8d-4ba9-4a77-8858-137229c972db\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kksxb" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.168214 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.173703 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fqscc" Feb 24 09:58:43 crc kubenswrapper[4755]: W0224 09:58:43.173892 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d61c075_3ea1_4130_bcab_a207ea44a31a.slice/crio-856b49d28d1f24e9e90f40c417047dff126b12a747d9d79e5d8f156882935a10 WatchSource:0}: Error finding container 856b49d28d1f24e9e90f40c417047dff126b12a747d9d79e5d8f156882935a10: Status 404 returned error can't find the container with id 856b49d28d1f24e9e90f40c417047dff126b12a747d9d79e5d8f156882935a10 Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.175012 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:43 crc kubenswrapper[4755]: E0224 09:58:43.175433 4755 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:43.67541735 +0000 UTC m=+228.131939893 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.175971 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9dhx\" (UniqueName: \"kubernetes.io/projected/b75345bf-b93f-471f-9b11-e5c5695e7e6a-kube-api-access-q9dhx\") pod \"router-default-5444994796-jb8zb\" (UID: \"b75345bf-b93f-471f-9b11-e5c5695e7e6a\") " pod="openshift-ingress/router-default-5444994796-jb8zb" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.191196 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-smsp7" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.199519 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/28ea82ea-bfeb-4a67-bac4-a97156c7995b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-m78zl\" (UID: \"28ea82ea-bfeb-4a67-bac4-a97156c7995b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m78zl" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.214328 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-jb8zb" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.216234 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqq59\" (UniqueName: \"kubernetes.io/projected/f70c68fa-4429-40ac-9d4a-72f5b5086c94-kube-api-access-rqq59\") pod \"package-server-manager-789f6589d5-tft86\" (UID: \"f70c68fa-4429-40ac-9d4a-72f5b5086c94\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tft86" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.221871 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kksxb" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.231618 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m78zl" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.235295 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt499\" (UniqueName: \"kubernetes.io/projected/ee204f54-d95c-4d2d-8b56-2812a6843938-kube-api-access-gt499\") pod \"control-plane-machine-set-operator-78cbb6b69f-4n9rx\" (UID: \"ee204f54-d95c-4d2d-8b56-2812a6843938\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4n9rx" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.238911 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p82hk" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.254485 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rmjk\" (UniqueName: \"kubernetes.io/projected/bdc04075-1afc-4a7b-8b56-72d3ad508da5-kube-api-access-5rmjk\") pod \"etcd-operator-b45778765-8n9ts\" (UID: \"bdc04075-1afc-4a7b-8b56-72d3ad508da5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8n9ts" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.276813 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:43 crc kubenswrapper[4755]: E0224 09:58:43.277179 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:43.777168484 +0000 UTC m=+228.233691027 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.283315 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mhrgs\" (UID: \"2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mhrgs" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.300306 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbm9l" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.310959 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd8rg\" (UniqueName: \"kubernetes.io/projected/387127d5-a12c-4dee-96d9-47f8983e0356-kube-api-access-wd8rg\") pod \"packageserver-d55dfcdfc-cltn2\" (UID: \"387127d5-a12c-4dee-96d9-47f8983e0356\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cltn2" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.315050 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4n9rx" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.326822 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tft86" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.338901 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2wnv\" (UniqueName: \"kubernetes.io/projected/2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0-kube-api-access-z2wnv\") pod \"ingress-operator-5b745b69d9-mhrgs\" (UID: \"2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mhrgs" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.359973 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff434ccf-df42-4dac-b43c-3f68265d2a7b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-v2srk\" (UID: \"ff434ccf-df42-4dac-b43c-3f68265d2a7b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v2srk" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.375970 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgtvf\" (UniqueName: \"kubernetes.io/projected/9fbb35dd-331f-43cb-8f92-f8a6f1e4ef1b-kube-api-access-pgtvf\") pod \"migrator-59844c95c7-wfrxf\" (UID: \"9fbb35dd-331f-43cb-8f92-f8a6f1e4ef1b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wfrxf" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.378215 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:43 crc kubenswrapper[4755]: E0224 09:58:43.378843 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:43.878827615 +0000 UTC m=+228.335350158 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.388611 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-8n9ts" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.410121 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-srwxs"] Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.416530 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jdxh\" (UniqueName: \"kubernetes.io/projected/d1c02e5f-aebc-4edb-9e99-f26fc84a32ab-kube-api-access-4jdxh\") pod \"downloads-7954f5f757-vhlbl\" (UID: \"d1c02e5f-aebc-4edb-9e99-f26fc84a32ab\") " pod="openshift-console/downloads-7954f5f757-vhlbl" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.429880 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5cb39b3-d948-4b7d-88b4-1c4962e0c35a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9kw78\" (UID: \"e5cb39b3-d948-4b7d-88b4-1c4962e0c35a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kw78" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.437578 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-95l97\" (UniqueName: \"kubernetes.io/projected/8694e8ab-019b-41c1-bf7c-49d3ca10f213-kube-api-access-95l97\") pod \"dns-operator-744455d44c-l7frg\" (UID: \"8694e8ab-019b-41c1-bf7c-49d3ca10f213\") " pod="openshift-dns-operator/dns-operator-744455d44c-l7frg" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.456502 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hdwg\" (UniqueName: \"kubernetes.io/projected/b320b4bf-81a0-4d15-a48b-2d24f6014162-kube-api-access-5hdwg\") pod \"openshift-controller-manager-operator-756b6f6bc6-8bwfr\" (UID: \"b320b4bf-81a0-4d15-a48b-2d24f6014162\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8bwfr" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.480125 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t522d" event={"ID":"ae717194-7e0e-4cfb-b662-88c914e8c670","Type":"ContainerStarted","Data":"7abfbc214854209c32a1f8fdaf9beb44487bbb6305bb673f31a1c544f9dbbdb8"} Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.480183 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t522d" event={"ID":"ae717194-7e0e-4cfb-b662-88c914e8c670","Type":"ContainerStarted","Data":"971b0aebd5da959992c3a2d313b5d97a959cab63e9da6d57c2becdcfcb903563"} Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.480443 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:43 crc kubenswrapper[4755]: E0224 09:58:43.480744 4755 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:43.980732594 +0000 UTC m=+228.437255137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.480828 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-vhlbl" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.481932 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hdpw\" (UniqueName: \"kubernetes.io/projected/9ab5d7c4-9b5a-4919-9452-0f906a526bef-kube-api-access-4hdpw\") pod \"olm-operator-6b444d44fb-7dprd\" (UID: \"9ab5d7c4-9b5a-4919-9452-0f906a526bef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dprd" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.499028 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw" event={"ID":"28cbd592-dd09-4070-b22e-f67f1d14dde2","Type":"ContainerStarted","Data":"4fffd43237dfa9c1920f38df3d4609d68cc6595a22ba018bed599a3fed873a21"} Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.499086 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw" 
event={"ID":"28cbd592-dd09-4070-b22e-f67f1d14dde2","Type":"ContainerStarted","Data":"52f95a710214b4ef44d1cc38e236f836be71f14a3f032d0af570d02b3d629b9c"} Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.499449 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.499592 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k64dq\" (UniqueName: \"kubernetes.io/projected/81a99991-5cb3-4242-b354-ee3908088b65-kube-api-access-k64dq\") pod \"kube-storage-version-migrator-operator-b67b599dd-stj9j\" (UID: \"81a99991-5cb3-4242-b354-ee3908088b65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stj9j" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.505916 4755 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-2g2cw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.505971 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw" podUID="28cbd592-dd09-4070-b22e-f67f1d14dde2" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.508468 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v2srk" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.521404 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcjxs\" (UniqueName: \"kubernetes.io/projected/279eff18-be0a-4f07-81fc-69fef8faac6c-kube-api-access-gcjxs\") pod \"collect-profiles-29532105-sr449\" (UID: \"279eff18-be0a-4f07-81fc-69fef8faac6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-sr449" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.541439 4755 generic.go:334] "Generic (PLEG): container finished" podID="b7bace2b-7d6c-4332-b9d3-a1848568dfb0" containerID="33297b57285a27e0032d404a47ba1eac1139070e14b3e8f8ff1f92bb477d3009" exitCode=0 Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.541499 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" event={"ID":"b7bace2b-7d6c-4332-b9d3-a1848568dfb0","Type":"ContainerDied","Data":"33297b57285a27e0032d404a47ba1eac1139070e14b3e8f8ff1f92bb477d3009"} Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.541526 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" event={"ID":"b7bace2b-7d6c-4332-b9d3-a1848568dfb0","Type":"ContainerStarted","Data":"a0ce34ddc58bfd4c3f2848e43a2c67439a9dde2d3d694d417d28d710a65fb304"} Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.544277 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r8gl\" (UniqueName: \"kubernetes.io/projected/443ac64a-4fb6-4571-8c73-e6cbf9491dbc-kube-api-access-8r8gl\") pod \"machine-config-server-5brsg\" (UID: \"443ac64a-4fb6-4571-8c73-e6cbf9491dbc\") " pod="openshift-machine-config-operator/machine-config-server-5brsg" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.548027 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8bwfr" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.553763 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jb8zb" event={"ID":"b75345bf-b93f-471f-9b11-e5c5695e7e6a","Type":"ContainerStarted","Data":"072b0f72e86791917d79fb78b516ff03833768920f9c69b49cd0b33bb9c8f3ca"} Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.555749 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-l7frg" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.559633 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vthnr\" (UniqueName: \"kubernetes.io/projected/1d2406ef-c2fd-495a-82fd-00ef8257e99e-kube-api-access-vthnr\") pod \"ingress-canary-dgmbb\" (UID: \"1d2406ef-c2fd-495a-82fd-00ef8257e99e\") " pod="openshift-ingress-canary/ingress-canary-dgmbb" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.566581 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mhrgs" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.571907 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wfrxf" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.577502 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw8tl\" (UniqueName: \"kubernetes.io/projected/e105e7e0-0046-47ca-8a73-c27e385a0301-kube-api-access-sw8tl\") pod \"marketplace-operator-79b997595-tp8zp\" (UID: \"e105e7e0-0046-47ca-8a73-c27e385a0301\") " pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.580933 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kw78" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.582150 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mtqp" event={"ID":"7aedb363-fae6-47ca-8a07-5cd884f8ad8c","Type":"ContainerStarted","Data":"c9abcdfb359dd3bcaec98ca588a2d85b472155b6e33f091d1b53ff1873364a39"} Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.582181 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mtqp" event={"ID":"7aedb363-fae6-47ca-8a07-5cd884f8ad8c","Type":"ContainerStarted","Data":"d1387727746399a8d5578f0e684f4ccb1ac6ddeb632462889811e8442fbed7af"} Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.582191 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mtqp" event={"ID":"7aedb363-fae6-47ca-8a07-5cd884f8ad8c","Type":"ContainerStarted","Data":"1b454384de008949202c45faa01e15ddc73a6a7183aba705ac5984d4e0055dc1"} Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.583232 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:43 crc kubenswrapper[4755]: E0224 09:58:43.584374 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:44.084353237 +0000 UTC m=+228.540875780 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.590180 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-smsp7"] Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.590185 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dprd" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.595993 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjz2z\" (UniqueName: \"kubernetes.io/projected/e76542af-c648-4aac-8cc1-e07d238c6a2c-kube-api-access-tjz2z\") pod \"service-ca-operator-777779d784-62kx9\" (UID: \"e76542af-c648-4aac-8cc1-e07d238c6a2c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-62kx9" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.607054 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cltn2" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.611356 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-49z4j" event={"ID":"914080f2-11d8-46d8-8ec3-fe3bebb059b6","Type":"ContainerStarted","Data":"faa500759e4e4ae33aff2d14dd7e0c74bb56da47f40b801659dde4c822b1c8c9"} Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.611389 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-49z4j" event={"ID":"914080f2-11d8-46d8-8ec3-fe3bebb059b6","Type":"ContainerStarted","Data":"bf0cb7f7123299e3b9dcc9a4549109ec5ede1ec5b691ab9619e37954f6a37058"} Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.613231 4755 generic.go:334] "Generic (PLEG): container finished" podID="22c817e6-69d9-401d-86a9-3f52ac3bd891" containerID="c02af72b80d63f633c4ff17a5d30624a9d89543ab6234bd8189a88976bef8f92" exitCode=0 Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.613344 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cq9sm" event={"ID":"22c817e6-69d9-401d-86a9-3f52ac3bd891","Type":"ContainerDied","Data":"c02af72b80d63f633c4ff17a5d30624a9d89543ab6234bd8189a88976bef8f92"} Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.613387 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cq9sm" event={"ID":"22c817e6-69d9-401d-86a9-3f52ac3bd891","Type":"ContainerStarted","Data":"745eb8af997001c074cdf0e09086a9cfa68a471f4402b9d8dda5f8880a089986"} Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.615920 4755 generic.go:334] "Generic (PLEG): container finished" podID="40eaba2b-f4b1-449d-84d2-b72c1c2b67b0" containerID="96f2e5a22080c26f17932db8cb580cdf7c34d7326680c2e945d15eab262fb373" exitCode=0 
Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.616016 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" event={"ID":"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0","Type":"ContainerDied","Data":"96f2e5a22080c26f17932db8cb580cdf7c34d7326680c2e945d15eab262fb373"} Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.616043 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" event={"ID":"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0","Type":"ContainerStarted","Data":"530dfe18c3510becba4ee55080a6f08f76195b41cb533dc254f32374ce8db250"} Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.617083 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m78zl"] Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.621118 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-wrwg8" event={"ID":"2d61c075-3ea1-4130-bcab-a207ea44a31a","Type":"ContainerStarted","Data":"856b49d28d1f24e9e90f40c417047dff126b12a747d9d79e5d8f156882935a10"} Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.624949 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" event={"ID":"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7","Type":"ContainerStarted","Data":"0d0560804bdbee0a7e0ddd55cea8f59543e417000d7a40a39d1616d7ffda1de2"} Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.624981 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" event={"ID":"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7","Type":"ContainerStarted","Data":"0f28b50851621089380b290d71906301e0b1c815868a2e3d9b7b09ee5f446140"} Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.625933 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.629746 4755 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-d9fml container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.629783 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" podUID="a6fd9421-d674-405c-a6c8-f25ff3c2f9f7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.633295 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-w7mwr" event={"ID":"c4ab2300-1f56-4c83-a156-78efcd90937a","Type":"ContainerStarted","Data":"6c5e43f56eb642e1acf8765aaad4bfa89a5febdbc10d7a2ef85638d4346cb9b5"} Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.633344 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-w7mwr" event={"ID":"c4ab2300-1f56-4c83-a156-78efcd90937a","Type":"ContainerStarted","Data":"2c9f447924b39ae64789ad6c897d7d4ad8cdc974b91e6c7c2da79803083a7692"} Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.633355 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-w7mwr" event={"ID":"c4ab2300-1f56-4c83-a156-78efcd90937a","Type":"ContainerStarted","Data":"5c1a3325e32e53464255abdf8cc9f44756c466f0e7998e232dfa23a3a9b1a385"} Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.635096 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-s4fnp\" (UniqueName: \"kubernetes.io/projected/68f685bc-87e6-44fb-97dd-798196ff677d-kube-api-access-s4fnp\") pod \"catalog-operator-68c6474976-w47s8\" (UID: \"68f685bc-87e6-44fb-97dd-798196ff677d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-w47s8" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.635456 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mdfn\" (UniqueName: \"kubernetes.io/projected/0d8a1f03-1c60-409f-85c0-5d2dad7e624c-kube-api-access-2mdfn\") pod \"dns-default-lj8d5\" (UID: \"0d8a1f03-1c60-409f-85c0-5d2dad7e624c\") " pod="openshift-dns/dns-default-lj8d5" Feb 24 09:58:43 crc kubenswrapper[4755]: W0224 09:58:43.639076 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28ea82ea_bfeb_4a67_bac4_a97156c7995b.slice/crio-5ee6c0270a20fcee8021beeb7bbdc9c0b7b52ab96461a503458e3070de28d188 WatchSource:0}: Error finding container 5ee6c0270a20fcee8021beeb7bbdc9c0b7b52ab96461a503458e3070de28d188: Status 404 returned error can't find the container with id 5ee6c0270a20fcee8021beeb7bbdc9c0b7b52ab96461a503458e3070de28d188 Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.642725 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" event={"ID":"2317257d-494c-48b7-a69c-013e8b1d7d81","Type":"ContainerStarted","Data":"acbd527aff0d913682efb9feff4bf93db2c9d9181e282bd7edab74df098f15d0"} Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.659310 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-246nf\" (UniqueName: \"kubernetes.io/projected/62383901-634c-43e5-9177-4be83e12d514-kube-api-access-246nf\") pod \"csi-hostpathplugin-zmmhb\" (UID: \"62383901-634c-43e5-9177-4be83e12d514\") " pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 
09:58:43.669224 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-sr449" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.676263 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-62kx9" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.677750 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4f3eb31-20c7-4549-9df4-979045249f81-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7qxdc\" (UID: \"a4f3eb31-20c7-4549-9df4-979045249f81\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7qxdc" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.688917 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:43 crc kubenswrapper[4755]: E0224 09:58:43.691625 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:44.191608663 +0000 UTC m=+228.648131196 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.699565 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jw6t\" (UniqueName: \"kubernetes.io/projected/324736a7-6465-419b-8d2a-0a8e678f5421-kube-api-access-8jw6t\") pod \"service-ca-9c57cc56f-t4p5v\" (UID: \"324736a7-6465-419b-8d2a-0a8e678f5421\") " pod="openshift-service-ca/service-ca-9c57cc56f-t4p5v" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.713144 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kksxb"] Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.728029 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.736583 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fqscc"] Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.748395 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bbm9l"] Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.748648 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stj9j" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.757479 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-lj8d5" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.762754 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4n9rx"] Feb 24 09:58:43 crc kubenswrapper[4755]: W0224 09:58:43.772364 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4139cb8d_4ba9_4a77_8858_137229c972db.slice/crio-988603c651e17ee0890a04664fd59a7b933b1b62e78625d22431b0b58da9423d WatchSource:0}: Error finding container 988603c651e17ee0890a04664fd59a7b933b1b62e78625d22431b0b58da9423d: Status 404 returned error can't find the container with id 988603c651e17ee0890a04664fd59a7b933b1b62e78625d22431b0b58da9423d Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.772981 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tft86"] Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.787865 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.790976 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:43 crc kubenswrapper[4755]: E0224 09:58:43.796534 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:44.296512157 +0000 UTC m=+228.753034700 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.798653 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-dgmbb" Feb 24 09:58:43 crc kubenswrapper[4755]: W0224 09:58:43.802853 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc15ecede_c840_4fc8_bc38_a970796c9517.slice/crio-4768d6648673b14ca2b36b7cfa861af87a5db60d2b61ff1e55ec7e5f05e8cfaf WatchSource:0}: Error finding container 4768d6648673b14ca2b36b7cfa861af87a5db60d2b61ff1e55ec7e5f05e8cfaf: Status 404 returned error can't find the container with id 4768d6648673b14ca2b36b7cfa861af87a5db60d2b61ff1e55ec7e5f05e8cfaf Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.804657 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5brsg" Feb 24 09:58:43 crc kubenswrapper[4755]: W0224 09:58:43.805253 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3abe3fcd_61fb_4f6e_b196_dfc60163c86b.slice/crio-62dc7211d05cc822149fec847fa052a9f294c744c181a4ab38ff56fc371f1190 WatchSource:0}: Error finding container 62dc7211d05cc822149fec847fa052a9f294c744c181a4ab38ff56fc371f1190: Status 404 returned error can't find the container with id 62dc7211d05cc822149fec847fa052a9f294c744c181a4ab38ff56fc371f1190 Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.871826 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v2srk"] Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.886333 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-p82hk"] Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.917444 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:43 crc kubenswrapper[4755]: E0224 09:58:43.918142 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:44.418115053 +0000 UTC m=+228.874637596 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.918570 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" podStartSLOduration=176.918545237 podStartE2EDuration="2m56.918545237s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:43.914615714 +0000 UTC m=+228.371138257" watchObservedRunningTime="2026-02-24 09:58:43.918545237 +0000 UTC m=+228.375067780" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.935136 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-w47s8" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.951021 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7qxdc" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.959041 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-t4p5v" Feb 24 09:58:43 crc kubenswrapper[4755]: I0224 09:58:43.977881 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-l7frg"] Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.007233 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-8n9ts"] Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.018439 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:44 crc kubenswrapper[4755]: E0224 09:58:44.019014 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:44.51899412 +0000 UTC m=+228.975516663 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.078327 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mhrgs"] Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.079384 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wfrxf"] Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.120212 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:44 crc kubenswrapper[4755]: E0224 09:58:44.120622 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:44.62060196 +0000 UTC m=+229.077124503 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.132534 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-49z4j" podStartSLOduration=177.132518244 podStartE2EDuration="2m57.132518244s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:44.116625784 +0000 UTC m=+228.573148327" watchObservedRunningTime="2026-02-24 09:58:44.132518244 +0000 UTC m=+228.589040787" Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.133042 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8bwfr"] Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.135481 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kw78"] Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.139792 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-vhlbl"] Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.166645 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cltn2"] Feb 24 09:58:44 crc kubenswrapper[4755]: W0224 09:58:44.167842 4755 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdc04075_1afc_4a7b_8b56_72d3ad508da5.slice/crio-90a1a5e6f3cde3dd91e3febfe1176e04596219c5fa8f6ff9fa0b14944e5d8634 WatchSource:0}: Error finding container 90a1a5e6f3cde3dd91e3febfe1176e04596219c5fa8f6ff9fa0b14944e5d8634: Status 404 returned error can't find the container with id 90a1a5e6f3cde3dd91e3febfe1176e04596219c5fa8f6ff9fa0b14944e5d8634 Feb 24 09:58:44 crc kubenswrapper[4755]: W0224 09:58:44.180876 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fbb35dd_331f_43cb_8f92_f8a6f1e4ef1b.slice/crio-ca46a751b962fe7e00f247202b22fa830e10f0955fc8896201abff658d099c2b WatchSource:0}: Error finding container ca46a751b962fe7e00f247202b22fa830e10f0955fc8896201abff658d099c2b: Status 404 returned error can't find the container with id ca46a751b962fe7e00f247202b22fa830e10f0955fc8896201abff658d099c2b Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.208803 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dprd"] Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.220868 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:44 crc kubenswrapper[4755]: E0224 09:58:44.221244 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:44.721228978 +0000 UTC m=+229.177751511 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.261250 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532105-sr449"] Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.327758 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:44 crc kubenswrapper[4755]: E0224 09:58:44.328542 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:44.828530237 +0000 UTC m=+229.285052780 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.431000 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:44 crc kubenswrapper[4755]: E0224 09:58:44.431496 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:44.931421586 +0000 UTC m=+229.387944129 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.431581 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:44 crc kubenswrapper[4755]: E0224 09:58:44.431892 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:44.931880491 +0000 UTC m=+229.388403024 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.534133 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stj9j"] Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.535953 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:44 crc kubenswrapper[4755]: E0224 09:58:44.537184 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:45.037150895 +0000 UTC m=+229.493673438 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.541716 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tp8zp"] Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.641501 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:44 crc kubenswrapper[4755]: E0224 09:58:44.641799 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:45.14178807 +0000 UTC m=+229.598310613 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.649447 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41052: no serving certificate available for the kubelet" Feb 24 09:58:44 crc kubenswrapper[4755]: W0224 09:58:44.683250 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode105e7e0_0046_47ca_8a73_c27e385a0301.slice/crio-db40bc7cfd6b26702365a5dc871e266bf888e8022fe4cb050c704bea14228ee2 WatchSource:0}: Error finding container db40bc7cfd6b26702365a5dc871e266bf888e8022fe4cb050c704bea14228ee2: Status 404 returned error can't find the container with id db40bc7cfd6b26702365a5dc871e266bf888e8022fe4cb050c704bea14228ee2 Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.714696 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41056: no serving certificate available for the kubelet" Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.722424 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wfrxf" event={"ID":"9fbb35dd-331f-43cb-8f92-f8a6f1e4ef1b","Type":"ContainerStarted","Data":"ca46a751b962fe7e00f247202b22fa830e10f0955fc8896201abff658d099c2b"} Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.743371 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-smsp7" 
event={"ID":"cf5157db-1776-4520-93cc-3af5f7c91511","Type":"ContainerStarted","Data":"ebd00a1364e9bcec764eb72b7710bd2a4c7c84f846dfe290b424fae547c2f1e0"} Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.743990 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:44 crc kubenswrapper[4755]: E0224 09:58:44.744327 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:45.244309028 +0000 UTC m=+229.700831571 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.813910 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41058: no serving certificate available for the kubelet" Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.816748 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m78zl" event={"ID":"28ea82ea-bfeb-4a67-bac4-a97156c7995b","Type":"ContainerStarted","Data":"31f1671aa6d3eed56369a24b35d5c10135de626cb0197abf79e84093df3ee773"} Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.816782 4755 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m78zl" event={"ID":"28ea82ea-bfeb-4a67-bac4-a97156c7995b","Type":"ContainerStarted","Data":"5ee6c0270a20fcee8021beeb7bbdc9c0b7b52ab96461a503458e3070de28d188"} Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.841316 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8bwfr" event={"ID":"b320b4bf-81a0-4d15-a48b-2d24f6014162","Type":"ContainerStarted","Data":"c4da65a6318367b2266d47ac7d7252f410151781d1ed2af3689fa8737012916e"} Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.848698 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dprd" event={"ID":"9ab5d7c4-9b5a-4919-9452-0f906a526bef","Type":"ContainerStarted","Data":"aafebf74ff39a696bf50e2d3c81e3034a95e2ef0f4ea02059c579073a0aea95b"} Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.849751 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:44 crc kubenswrapper[4755]: E0224 09:58:44.851673 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:45.351661528 +0000 UTC m=+229.808184071 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.859237 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-sr449" event={"ID":"279eff18-be0a-4f07-81fc-69fef8faac6c","Type":"ContainerStarted","Data":"04b28e9ecb34f28164c35b74bdb697cb4014342fa246003c14180cd1b3af31bb"} Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.875014 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jb8zb" event={"ID":"b75345bf-b93f-471f-9b11-e5c5695e7e6a","Type":"ContainerStarted","Data":"fd23d8d05501e1114bc0eb366538533b48d1c2f23d7c01de170f24eadcd164b0"} Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.896726 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-62kx9"] Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.899737 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t4p5v"] Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.906504 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-wrwg8" event={"ID":"2d61c075-3ea1-4130-bcab-a207ea44a31a","Type":"ContainerStarted","Data":"072381fed4b616c2fa6698de18db2dd0a208b10deab6f5a10191e6b8685860fb"} Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.907531 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console-operator/console-operator-58897d9998-wrwg8" Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.908447 4755 patch_prober.go:28] interesting pod/console-operator-58897d9998-wrwg8 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.908498 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-wrwg8" podUID="2d61c075-3ea1-4130-bcab-a207ea44a31a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.911304 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mhrgs" event={"ID":"2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0","Type":"ContainerStarted","Data":"e3a4a36fd7305d4a3a133c1f8bf87d1e71e98c52970b1abb34a0872da8c57323"} Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.912357 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4n9rx" event={"ID":"ee204f54-d95c-4d2d-8b56-2812a6843938","Type":"ContainerStarted","Data":"ef7259104294d3ea7036a646d3b98d7a16a68f12f2ac0fba06dc21d3dabd0eb8"} Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.920226 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-w7mwr" podStartSLOduration=176.920205039 podStartE2EDuration="2m56.920205039s" podCreationTimestamp="2026-02-24 09:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:44.919515328 +0000 UTC 
m=+229.376037861" watchObservedRunningTime="2026-02-24 09:58:44.920205039 +0000 UTC m=+229.376727582" Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.926284 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41070: no serving certificate available for the kubelet" Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.931185 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fqscc" event={"ID":"c15ecede-c840-4fc8-bc38-a970796c9517","Type":"ContainerStarted","Data":"4768d6648673b14ca2b36b7cfa861af87a5db60d2b61ff1e55ec7e5f05e8cfaf"} Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.934832 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zmmhb"] Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.943558 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" event={"ID":"2317257d-494c-48b7-a69c-013e8b1d7d81","Type":"ContainerStarted","Data":"9f14a4e345cfd7f30855ba0f74b783b45dd9b472aeaa987e1debcd6d90a99cd3"} Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.946727 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.948132 4755 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-srwxs container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.948184 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" podUID="2317257d-494c-48b7-a69c-013e8b1d7d81" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 
10.217.0.12:6443: connect: connection refused" Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.950750 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:44 crc kubenswrapper[4755]: E0224 09:58:44.952241 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:45.452223195 +0000 UTC m=+229.908745728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.952389 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:44 crc kubenswrapper[4755]: E0224 09:58:44.953624 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-24 09:58:45.453611999 +0000 UTC m=+229.910134542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.956766 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-dgmbb"] Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.959927 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kw78" event={"ID":"e5cb39b3-d948-4b7d-88b4-1c4962e0c35a","Type":"ContainerStarted","Data":"78b44f0145687c2a5429c67c077a441ada8ed5e0cf17b830d62239c88e461184"} Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.970251 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v2srk" event={"ID":"ff434ccf-df42-4dac-b43c-3f68265d2a7b","Type":"ContainerStarted","Data":"9b3fea00096f0b3586dfbe026464dc1f29b3185bd987c15018a275baa5565973"} Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.985730 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cq9sm" event={"ID":"22c817e6-69d9-401d-86a9-3f52ac3bd891","Type":"ContainerStarted","Data":"708c5a92d8ad559854def4a1c204636616770e6e2493ba85a8db98ba82a0532f"} Feb 24 09:58:44 crc kubenswrapper[4755]: I0224 09:58:44.986828 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-cq9sm" Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.021342 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-8n9ts" event={"ID":"bdc04075-1afc-4a7b-8b56-72d3ad508da5","Type":"ContainerStarted","Data":"90a1a5e6f3cde3dd91e3febfe1176e04596219c5fa8f6ff9fa0b14944e5d8634"} Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.023592 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41086: no serving certificate available for the kubelet" Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.049417 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-w47s8"] Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.053800 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:45 crc kubenswrapper[4755]: E0224 09:58:45.053899 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:45.553880876 +0000 UTC m=+230.010403419 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.054907 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:45 crc kubenswrapper[4755]: E0224 09:58:45.057640 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:45.557629233 +0000 UTC m=+230.014151776 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.066678 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lj8d5"] Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.074569 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p82hk" event={"ID":"31ac03e9-2f2e-4103-bcc8-f660c98d5a39","Type":"ContainerStarted","Data":"cd47cae41e89ad057f6fd008869b3bd9a8744cae76329a4af8bb3c3ec0677937"} Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.123799 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mtqp" podStartSLOduration=178.123784791 podStartE2EDuration="2m58.123784791s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:45.122648844 +0000 UTC m=+229.579171387" watchObservedRunningTime="2026-02-24 09:58:45.123784791 +0000 UTC m=+229.580307334" Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.134979 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-vhlbl" event={"ID":"d1c02e5f-aebc-4edb-9e99-f26fc84a32ab","Type":"ContainerStarted","Data":"3b3eaedf659e5a3b312412b4ac78bf32d7787f3a344dcd637791c0a31a6b8011"} Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.139818 4755 ???:1] "http: TLS handshake error from 
192.168.126.11:41102: no serving certificate available for the kubelet" Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.144833 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbm9l" event={"ID":"3abe3fcd-61fb-4f6e-b196-dfc60163c86b","Type":"ContainerStarted","Data":"62dc7211d05cc822149fec847fa052a9f294c744c181a4ab38ff56fc371f1190"} Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.157487 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" event={"ID":"b7bace2b-7d6c-4332-b9d3-a1848568dfb0","Type":"ContainerStarted","Data":"af7e1736b85a69b15ac47356295e994f71c0c26674d85e50b479ebc1541536af"} Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.159083 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:45 crc kubenswrapper[4755]: E0224 09:58:45.159421 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:45.659404598 +0000 UTC m=+230.115927141 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.177964 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5brsg" event={"ID":"443ac64a-4fb6-4571-8c73-e6cbf9491dbc","Type":"ContainerStarted","Data":"59300870cad23d5dfa01312237fb51e76125c650da2752412bc2c0f437d456b2"} Feb 24 09:58:45 crc kubenswrapper[4755]: W0224 09:58:45.181119 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62383901_634c_43e5_9177_4be83e12d514.slice/crio-9264fe8273444a39060490e789b94a0ca3136d6a08651cf936a60b86fa76f345 WatchSource:0}: Error finding container 9264fe8273444a39060490e789b94a0ca3136d6a08651cf936a60b86fa76f345: Status 404 returned error can't find the container with id 9264fe8273444a39060490e789b94a0ca3136d6a08651cf936a60b86fa76f345 Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.182107 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-l7frg" event={"ID":"8694e8ab-019b-41c1-bf7c-49d3ca10f213","Type":"ContainerStarted","Data":"4dd917d44a7764d10f8f625bb502dc2f497d0e71216b208bf63a629d2790b0a6"} Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.182906 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7qxdc"] Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.183668 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cltn2" event={"ID":"387127d5-a12c-4dee-96d9-47f8983e0356","Type":"ContainerStarted","Data":"e5b036332b07bf118640bfa3a4cba15f91320cc5ea802e8041f99581029c251c"} Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.215974 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tft86" event={"ID":"f70c68fa-4429-40ac-9d4a-72f5b5086c94","Type":"ContainerStarted","Data":"dd8db8d1f9d51c6e485b388e8f2e9453c0af61a1b8db5e0f051f43d0550fd40f"} Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.221833 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-jb8zb" Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.232299 4755 patch_prober.go:28] interesting pod/router-default-5444994796-jb8zb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:58:45 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Feb 24 09:58:45 crc kubenswrapper[4755]: [+]process-running ok Feb 24 09:58:45 crc kubenswrapper[4755]: healthz check failed Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.232365 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jb8zb" podUID="b75345bf-b93f-471f-9b11-e5c5695e7e6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.234329 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41116: no serving certificate available for the kubelet" Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.250703 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t522d" 
event={"ID":"ae717194-7e0e-4cfb-b662-88c914e8c670","Type":"ContainerStarted","Data":"007bb27171bfbf794d17e442207643e8bdab736811f3cc978cafdae6bfae5253"} Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.260661 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:45 crc kubenswrapper[4755]: E0224 09:58:45.262382 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:45.76236537 +0000 UTC m=+230.218887913 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.277939 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kksxb" event={"ID":"4139cb8d-4ba9-4a77-8858-137229c972db","Type":"ContainerStarted","Data":"988603c651e17ee0890a04664fd59a7b933b1b62e78625d22431b0b58da9423d"} Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.284214 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" Feb 24 09:58:45 crc 
kubenswrapper[4755]: I0224 09:58:45.287585 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw" Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.340880 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw" podStartSLOduration=177.340865954 podStartE2EDuration="2m57.340865954s" podCreationTimestamp="2026-02-24 09:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:45.33785895 +0000 UTC m=+229.794381493" watchObservedRunningTime="2026-02-24 09:58:45.340865954 +0000 UTC m=+229.797388497" Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.349627 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41128: no serving certificate available for the kubelet" Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.362634 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:45 crc kubenswrapper[4755]: E0224 09:58:45.362961 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:45.862945517 +0000 UTC m=+230.319468060 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.446936 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cq9sm" podStartSLOduration=178.446915073 podStartE2EDuration="2m58.446915073s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:45.434610008 +0000 UTC m=+229.891132551" watchObservedRunningTime="2026-02-24 09:58:45.446915073 +0000 UTC m=+229.903437616" Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.465805 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:45 crc kubenswrapper[4755]: E0224 09:58:45.468829 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:45.96881186 +0000 UTC m=+230.425334403 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.487446 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-fqscc" podStartSLOduration=178.487425415 podStartE2EDuration="2m58.487425415s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:45.485774563 +0000 UTC m=+229.942297106" watchObservedRunningTime="2026-02-24 09:58:45.487425415 +0000 UTC m=+229.943947958" Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.566645 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:45 crc kubenswrapper[4755]: E0224 09:58:45.567199 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:46.067186259 +0000 UTC m=+230.523708802 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.597775 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-wrwg8" podStartSLOduration=178.597755008 podStartE2EDuration="2m58.597755008s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:45.597331335 +0000 UTC m=+230.053853878" watchObservedRunningTime="2026-02-24 09:58:45.597755008 +0000 UTC m=+230.054277551" Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.600386 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t522d" podStartSLOduration=178.60037254 podStartE2EDuration="2m58.60037254s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:45.562856863 +0000 UTC m=+230.019379406" watchObservedRunningTime="2026-02-24 09:58:45.60037254 +0000 UTC m=+230.056895083" Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.669188 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: 
\"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:45 crc kubenswrapper[4755]: E0224 09:58:45.669517 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:46.169505881 +0000 UTC m=+230.626028424 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.688802 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m78zl" podStartSLOduration=178.688786056 podStartE2EDuration="2m58.688786056s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:45.686707941 +0000 UTC m=+230.143230484" watchObservedRunningTime="2026-02-24 09:58:45.688786056 +0000 UTC m=+230.145308599" Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.758973 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" podStartSLOduration=178.758955379 podStartE2EDuration="2m58.758955379s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 
09:58:45.722451963 +0000 UTC m=+230.178974506" watchObservedRunningTime="2026-02-24 09:58:45.758955379 +0000 UTC m=+230.215477922" Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.772097 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:45 crc kubenswrapper[4755]: E0224 09:58:45.772207 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:46.272191094 +0000 UTC m=+230.728713637 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.772418 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:45 crc kubenswrapper[4755]: E0224 09:58:45.772904 4755 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:46.272896876 +0000 UTC m=+230.729419419 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.793489 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4n9rx" podStartSLOduration=177.793472803 podStartE2EDuration="2m57.793472803s" podCreationTimestamp="2026-02-24 09:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:45.791594653 +0000 UTC m=+230.248117196" watchObservedRunningTime="2026-02-24 09:58:45.793472803 +0000 UTC m=+230.249995346" Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.793673 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-jb8zb" podStartSLOduration=178.793667618 podStartE2EDuration="2m58.793667618s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:45.75996209 +0000 UTC m=+230.216484623" watchObservedRunningTime="2026-02-24 09:58:45.793667618 +0000 UTC m=+230.250190161" Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.874158 4755 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:45 crc kubenswrapper[4755]: E0224 09:58:45.874418 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:46.374396933 +0000 UTC m=+230.830919476 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.874679 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:45 crc kubenswrapper[4755]: E0224 09:58:45.874968 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:46.37495498 +0000 UTC m=+230.831477523 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.975049 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:45 crc kubenswrapper[4755]: E0224 09:58:45.975176 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:46.475149855 +0000 UTC m=+230.931672388 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:45 crc kubenswrapper[4755]: I0224 09:58:45.975407 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:45 crc kubenswrapper[4755]: E0224 09:58:45.975702 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:46.475690122 +0000 UTC m=+230.932212665 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.032020 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41138: no serving certificate available for the kubelet" Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.078888 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:46 crc kubenswrapper[4755]: E0224 09:58:46.079482 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:46.579466889 +0000 UTC m=+231.035989432 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.188294 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:46 crc kubenswrapper[4755]: E0224 09:58:46.195365 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:46.695341477 +0000 UTC m=+231.151864020 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.235333 4755 patch_prober.go:28] interesting pod/router-default-5444994796-jb8zb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:58:46 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Feb 24 09:58:46 crc kubenswrapper[4755]: [+]process-running ok Feb 24 09:58:46 crc kubenswrapper[4755]: healthz check failed Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.235414 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jb8zb" podUID="b75345bf-b93f-471f-9b11-e5c5695e7e6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.289420 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:46 crc kubenswrapper[4755]: E0224 09:58:46.290085 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-24 09:58:46.79005468 +0000 UTC m=+231.246577223 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.365408 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" event={"ID":"62383901-634c-43e5-9177-4be83e12d514","Type":"ContainerStarted","Data":"9264fe8273444a39060490e789b94a0ca3136d6a08651cf936a60b86fa76f345"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.385657 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mhrgs" event={"ID":"2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0","Type":"ContainerStarted","Data":"fdd4b1116839f2b01a1ea76322e943943989e9da071c834fe7f839991f76accd"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.393713 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:46 crc kubenswrapper[4755]: E0224 09:58:46.394017 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-24 09:58:46.894006143 +0000 UTC m=+231.350528676 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.411344 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dgmbb" event={"ID":"1d2406ef-c2fd-495a-82fd-00ef8257e99e","Type":"ContainerStarted","Data":"13a99ddbbea734c950acbb23ccb85c5689ac103d776f82d1bd9db44014f7e4c6"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.411670 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dgmbb" event={"ID":"1d2406ef-c2fd-495a-82fd-00ef8257e99e","Type":"ContainerStarted","Data":"7accdd6df999ae9eb2809d73c37181a727f74b8f15a632c66b161c169ec1dc6b"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.427989 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5brsg" event={"ID":"443ac64a-4fb6-4571-8c73-e6cbf9491dbc","Type":"ContainerStarted","Data":"94d52a63dbd2ed626c9dbc6cf82e3274ede2f3d33f6afe02d33758a7074fbae8"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.440689 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4n9rx" event={"ID":"ee204f54-d95c-4d2d-8b56-2812a6843938","Type":"ContainerStarted","Data":"ca1d0553b16de25cbb7ea5e7a05fccdd5f976f5d75bc8fc57bbbbef020d0b188"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.466740 4755 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-console/console-f9d7485db-fqscc" event={"ID":"c15ecede-c840-4fc8-bc38-a970796c9517","Type":"ContainerStarted","Data":"96a4b69f601bc685b906f4cacf5e70c36c7d9bac9b10756a48e0e4deb9d59523"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.478776 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lj8d5" event={"ID":"0d8a1f03-1c60-409f-85c0-5d2dad7e624c","Type":"ContainerStarted","Data":"d9d4c9d5f220cafae413de17ce9d886fc298ec0eb77e50cf68b60c41cdd7b92b"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.512004 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-smsp7" event={"ID":"cf5157db-1776-4520-93cc-3af5f7c91511","Type":"ContainerStarted","Data":"3f3545f3d1e007a545afbf895d901cb3ae705a41efa6e52a5238122a56458dea"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.512878 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:46 crc kubenswrapper[4755]: E0224 09:58:46.514119 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:47.014105683 +0000 UTC m=+231.470628226 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.551047 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-l7frg" event={"ID":"8694e8ab-019b-41c1-bf7c-49d3ca10f213","Type":"ContainerStarted","Data":"542d6557c9eb24b6774f676f221feff77bf954bb9b90b732927a6f68f998c9c5"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.572464 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p82hk" event={"ID":"31ac03e9-2f2e-4103-bcc8-f660c98d5a39","Type":"ContainerStarted","Data":"59763ca19ae41f0ad6ee8a8614cf8b79da20c4e6353b535a89cfdc41f6651036"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.572501 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p82hk" event={"ID":"31ac03e9-2f2e-4103-bcc8-f660c98d5a39","Type":"ContainerStarted","Data":"610ada6227e0d15256aae28ab57c8f54efc90b4cb086915cf4d1c586352b2d7e"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.609773 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-8n9ts" event={"ID":"bdc04075-1afc-4a7b-8b56-72d3ad508da5","Type":"ContainerStarted","Data":"5e037e2a576cd85f3de0e3e4b6629ae6dcd8e1b8bc42e2048d62673053559499"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.614588 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:46 crc kubenswrapper[4755]: E0224 09:58:46.617150 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:47.117108526 +0000 UTC m=+231.573631069 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.650252 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kksxb" event={"ID":"4139cb8d-4ba9-4a77-8858-137229c972db","Type":"ContainerStarted","Data":"c65d1d0ecea55c588180ce9e89f41da1a32e57b89087d8638cd724987b782837"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.657169 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-l7frg" podStartSLOduration=179.657150853 podStartE2EDuration="2m59.657150853s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:46.655631756 +0000 UTC m=+231.112154299" watchObservedRunningTime="2026-02-24 09:58:46.657150853 +0000 UTC 
m=+231.113673396" Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.667774 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-t4p5v" event={"ID":"324736a7-6465-419b-8d2a-0a8e678f5421","Type":"ContainerStarted","Data":"de18f807c8d78d73658811dcfdd7fec4e41ac093c83313fd71849b53db0bcb16"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.668047 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-t4p5v" event={"ID":"324736a7-6465-419b-8d2a-0a8e678f5421","Type":"ContainerStarted","Data":"bf82f32e450edde7fc389bb3fafa0871ba54cf59f0608b0067a40d43f7f33fcf"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.685474 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-8n9ts" podStartSLOduration=179.685456662 podStartE2EDuration="2m59.685456662s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:46.683426229 +0000 UTC m=+231.139948772" watchObservedRunningTime="2026-02-24 09:58:46.685456662 +0000 UTC m=+231.141979205" Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.714087 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p82hk" podStartSLOduration=178.71404309 podStartE2EDuration="2m58.71404309s" podCreationTimestamp="2026-02-24 09:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:46.712615584 +0000 UTC m=+231.169138127" watchObservedRunningTime="2026-02-24 09:58:46.71404309 +0000 UTC m=+231.170565633" Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.719619 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:46 crc kubenswrapper[4755]: E0224 09:58:46.721081 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:47.221045879 +0000 UTC m=+231.677568422 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.752438 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dprd" event={"ID":"9ab5d7c4-9b5a-4919-9452-0f906a526bef","Type":"ContainerStarted","Data":"12a4f230534e3cf74e24be021ad2b8b3fb13754b5cb1a881d9209e229eaf9356"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.753255 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dprd" Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.759839 4755 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7dprd container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" 
start-of-body= Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.759894 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dprd" podUID="9ab5d7c4-9b5a-4919-9452-0f906a526bef" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.761758 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-dgmbb" podStartSLOduration=6.761745687 podStartE2EDuration="6.761745687s" podCreationTimestamp="2026-02-24 09:58:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:46.760342243 +0000 UTC m=+231.216864786" watchObservedRunningTime="2026-02-24 09:58:46.761745687 +0000 UTC m=+231.218268230" Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.781437 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tft86" event={"ID":"f70c68fa-4429-40ac-9d4a-72f5b5086c94","Type":"ContainerStarted","Data":"2ebea244bdf19fe71ea9ea8790b8e98f4371788804f80f25124a8f23e07e13b3"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.781499 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tft86" event={"ID":"f70c68fa-4429-40ac-9d4a-72f5b5086c94","Type":"ContainerStarted","Data":"27a0fdc7566ce74732dcf8851e833909bf38c76972b7f8206df7120ddd6071d0"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.783716 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tft86" Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.801544 4755 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-62kx9" event={"ID":"e76542af-c648-4aac-8cc1-e07d238c6a2c","Type":"ContainerStarted","Data":"1d6ab2305624d5ece57cf3ded01c4a7ad6474872e5cb491aae8a49b994dafcce"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.801592 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-62kx9" event={"ID":"e76542af-c648-4aac-8cc1-e07d238c6a2c","Type":"ContainerStarted","Data":"cc82f7f467b035b65c2c0816016b5026ff32e44b01372913e147a74961086c6f"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.813572 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v2srk" event={"ID":"ff434ccf-df42-4dac-b43c-3f68265d2a7b","Type":"ContainerStarted","Data":"3b8ad1f240b41d49a9de66b2b950bb4d7f07e174966d8d5ad9322bc05fedd2e6"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.826825 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:46 crc kubenswrapper[4755]: E0224 09:58:46.828798 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:47.328786432 +0000 UTC m=+231.785308975 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.841844 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-smsp7" podStartSLOduration=179.841828221 podStartE2EDuration="2m59.841828221s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:46.839171647 +0000 UTC m=+231.295694180" watchObservedRunningTime="2026-02-24 09:58:46.841828221 +0000 UTC m=+231.298350764" Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.842049 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-5brsg" podStartSLOduration=6.842044867 podStartE2EDuration="6.842044867s" podCreationTimestamp="2026-02-24 09:58:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:46.809799865 +0000 UTC m=+231.266322408" watchObservedRunningTime="2026-02-24 09:58:46.842044867 +0000 UTC m=+231.298567410" Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.865750 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbm9l" 
event={"ID":"3abe3fcd-61fb-4f6e-b196-dfc60163c86b","Type":"ContainerStarted","Data":"77bfcac5ca28b55643e3df359e6966ea2037bf283d6883ebd3fab28423121b6d"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.865794 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbm9l" event={"ID":"3abe3fcd-61fb-4f6e-b196-dfc60163c86b","Type":"ContainerStarted","Data":"659aff9409ced7bfc2e48c21edbd8bf0a99393c5d739f72c7426347f0061b0d0"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.903119 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-kksxb" podStartSLOduration=178.903104384 podStartE2EDuration="2m58.903104384s" podCreationTimestamp="2026-02-24 09:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:46.895433553 +0000 UTC m=+231.351956096" watchObservedRunningTime="2026-02-24 09:58:46.903104384 +0000 UTC m=+231.359626927" Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.913394 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stj9j" event={"ID":"81a99991-5cb3-4242-b354-ee3908088b65","Type":"ContainerStarted","Data":"5301cf2f9c58831e7aa33ae6c5dd26a07c4a22f8019c007095f78449a4eb15a4"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.913435 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stj9j" event={"ID":"81a99991-5cb3-4242-b354-ee3908088b65","Type":"ContainerStarted","Data":"571139a4edf050498515b63710d0a227d797701e7ee080da0bd4f1b06b8031cb"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.927557 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:46 crc kubenswrapper[4755]: E0224 09:58:46.928255 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:47.428239663 +0000 UTC m=+231.884762206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.943980 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-62kx9" podStartSLOduration=178.943959607 podStartE2EDuration="2m58.943959607s" podCreationTimestamp="2026-02-24 09:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:46.941463548 +0000 UTC m=+231.397986091" watchObservedRunningTime="2026-02-24 09:58:46.943959607 +0000 UTC m=+231.400482150" Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.956316 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-w47s8" 
event={"ID":"68f685bc-87e6-44fb-97dd-798196ff677d","Type":"ContainerStarted","Data":"c2a6ca2b400e2fa8ad177faa0431178799b1260228ec3f18a4909fc6847f69f5"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.956354 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-w47s8" Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.963839 4755 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-w47s8 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.963883 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-w47s8" podUID="68f685bc-87e6-44fb-97dd-798196ff677d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.975599 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v2srk" podStartSLOduration=179.975582129 podStartE2EDuration="2m59.975582129s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:46.9737072 +0000 UTC m=+231.430229743" watchObservedRunningTime="2026-02-24 09:58:46.975582129 +0000 UTC m=+231.432104672" Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.987452 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" 
event={"ID":"e105e7e0-0046-47ca-8a73-c27e385a0301","Type":"ContainerStarted","Data":"3135b9ca423f992987cdab3ae9179ef151afbc6ec38de245af48f4c5b16f3b94"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.987495 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" event={"ID":"e105e7e0-0046-47ca-8a73-c27e385a0301","Type":"ContainerStarted","Data":"db40bc7cfd6b26702365a5dc871e266bf888e8022fe4cb050c704bea14228ee2"} Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.988350 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.994224 4755 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tp8zp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 24 09:58:46 crc kubenswrapper[4755]: I0224 09:58:46.994266 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" podUID="e105e7e0-0046-47ca-8a73-c27e385a0301" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.004019 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-vhlbl" event={"ID":"d1c02e5f-aebc-4edb-9e99-f26fc84a32ab","Type":"ContainerStarted","Data":"657ec663f4d70c9138a544df3c8fa848580f3d9fe06dda7236ea0e4a6f428041"} Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.004789 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-vhlbl" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 
09:58:47.010604 4755 patch_prober.go:28] interesting pod/downloads-7954f5f757-vhlbl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.010646 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vhlbl" podUID="d1c02e5f-aebc-4edb-9e99-f26fc84a32ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.029398 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:47 crc kubenswrapper[4755]: E0224 09:58:47.030472 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:47.530453702 +0000 UTC m=+231.986976235 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.031953 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wfrxf" event={"ID":"9fbb35dd-331f-43cb-8f92-f8a6f1e4ef1b","Type":"ContainerStarted","Data":"7fe147781374cca9e13e2618ad02d910df46b597774c89cbfdbbb06e743f8756"} Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.031988 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wfrxf" event={"ID":"9fbb35dd-331f-43cb-8f92-f8a6f1e4ef1b","Type":"ContainerStarted","Data":"9b3c3b7f78708eb4a58b134932b6196a72cd7c073e0f7762125d8f41c2abbdce"} Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.074138 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cltn2" event={"ID":"387127d5-a12c-4dee-96d9-47f8983e0356","Type":"ContainerStarted","Data":"6805e7b86bad298e047d550835b2555114e8b1c2212040330546f395120609ef"} Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.075212 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cltn2" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.100000 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tft86" podStartSLOduration=179.099979274 podStartE2EDuration="2m59.099979274s" 
podCreationTimestamp="2026-02-24 09:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:47.064257433 +0000 UTC m=+231.520779976" watchObservedRunningTime="2026-02-24 09:58:47.099979274 +0000 UTC m=+231.556501817" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.102333 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8bwfr" event={"ID":"b320b4bf-81a0-4d15-a48b-2d24f6014162","Type":"ContainerStarted","Data":"f9d299f2449d44cf5a826dc00acfbe1adb162ac04122156ad05eec2785cf1f1d"} Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.118664 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7qxdc" event={"ID":"a4f3eb31-20c7-4549-9df4-979045249f81","Type":"ContainerStarted","Data":"75f8368c5d70b3ef10d69613fdb874509847769ed0dd22b0bc8865d05b37fc81"} Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.123815 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" event={"ID":"40eaba2b-f4b1-449d-84d2-b72c1c2b67b0","Type":"ContainerStarted","Data":"3f9add9c0d03ad2455a129c07377a811d17f91f0bf8a97989242fe30e2caaecd"} Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.132484 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:47 crc kubenswrapper[4755]: E0224 09:58:47.132577 4755 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:47.632558747 +0000 UTC m=+232.089081290 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.133422 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.134956 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kw78" event={"ID":"e5cb39b3-d948-4b7d-88b4-1c4962e0c35a","Type":"ContainerStarted","Data":"0d3df0d3ac98493a945c1a6217c326f51c2b3c5156b6f2e9d614934e3ab7dbb2"} Feb 24 09:58:47 crc kubenswrapper[4755]: E0224 09:58:47.135956 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:47.635943963 +0000 UTC m=+232.092466506 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.142574 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dprd" podStartSLOduration=179.142559321 podStartE2EDuration="2m59.142559321s" podCreationTimestamp="2026-02-24 09:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:47.103191905 +0000 UTC m=+231.559714448" watchObservedRunningTime="2026-02-24 09:58:47.142559321 +0000 UTC m=+231.599081864" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.166486 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-sr449" event={"ID":"279eff18-be0a-4f07-81fc-69fef8faac6c","Type":"ContainerStarted","Data":"79564065fbe3403c26cf582934dc2e1bcb7d67b6f7c12c00cba81f1397848543"} Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.182601 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-wrwg8" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.190942 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-t4p5v" podStartSLOduration=179.190911448 podStartE2EDuration="2m59.190911448s" podCreationTimestamp="2026-02-24 09:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:47.141961662 +0000 UTC m=+231.598484205" watchObservedRunningTime="2026-02-24 09:58:47.190911448 +0000 UTC m=+231.647433991" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.194368 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cltn2" podStartSLOduration=179.194162711 podStartE2EDuration="2m59.194162711s" podCreationTimestamp="2026-02-24 09:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:47.191136575 +0000 UTC m=+231.647659118" watchObservedRunningTime="2026-02-24 09:58:47.194162711 +0000 UTC m=+231.650685254" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.197438 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.217320 4755 patch_prober.go:28] interesting pod/router-default-5444994796-jb8zb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:58:47 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Feb 24 09:58:47 crc kubenswrapper[4755]: [+]process-running ok Feb 24 09:58:47 crc kubenswrapper[4755]: healthz check failed Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.217381 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jb8zb" podUID="b75345bf-b93f-471f-9b11-e5c5695e7e6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.234529 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:47 crc kubenswrapper[4755]: E0224 09:58:47.237577 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:47.737557843 +0000 UTC m=+232.194080386 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.299306 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" podStartSLOduration=179.299287041 podStartE2EDuration="2m59.299287041s" podCreationTimestamp="2026-02-24 09:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:47.295071528 +0000 UTC m=+231.751594081" watchObservedRunningTime="2026-02-24 09:58:47.299287041 +0000 UTC m=+231.755809584" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.331463 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cq9sm" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.331529 4755 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-w47s8" podStartSLOduration=179.331509732 podStartE2EDuration="2m59.331509732s" podCreationTimestamp="2026-02-24 09:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:47.329596642 +0000 UTC m=+231.786119185" watchObservedRunningTime="2026-02-24 09:58:47.331509732 +0000 UTC m=+231.788032275" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.336405 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:47 crc kubenswrapper[4755]: E0224 09:58:47.336834 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:47.836813428 +0000 UTC m=+232.293335971 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.404361 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41144: no serving certificate available for the kubelet" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.426921 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" podStartSLOduration=179.426904237 podStartE2EDuration="2m59.426904237s" podCreationTimestamp="2026-02-24 09:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:47.425650347 +0000 UTC m=+231.882172890" watchObservedRunningTime="2026-02-24 09:58:47.426904237 +0000 UTC m=+231.883426780" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.427289 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stj9j" podStartSLOduration=179.427284959 podStartE2EDuration="2m59.427284959s" podCreationTimestamp="2026-02-24 09:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:47.398201065 +0000 UTC m=+231.854723598" watchObservedRunningTime="2026-02-24 09:58:47.427284959 +0000 UTC m=+231.883807502" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.438916 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:47 crc kubenswrapper[4755]: E0224 09:58:47.439257 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:47.939242734 +0000 UTC m=+232.395765277 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.439772 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.439798 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.539875 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8bwfr" podStartSLOduration=180.539856292 podStartE2EDuration="3m0.539856292s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 
09:58:47.538566372 +0000 UTC m=+231.995088915" watchObservedRunningTime="2026-02-24 09:58:47.539856292 +0000 UTC m=+231.996378835" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.540222 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-vhlbl" podStartSLOduration=180.540217744 podStartE2EDuration="3m0.540217744s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:47.495999805 +0000 UTC m=+231.952522348" watchObservedRunningTime="2026-02-24 09:58:47.540217744 +0000 UTC m=+231.996740287" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.541608 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:47 crc kubenswrapper[4755]: E0224 09:58:47.541895 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:48.041883996 +0000 UTC m=+232.498406539 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.568939 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wfrxf" podStartSLOduration=179.568923034 podStartE2EDuration="2m59.568923034s" podCreationTimestamp="2026-02-24 09:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:47.568389698 +0000 UTC m=+232.024912241" watchObservedRunningTime="2026-02-24 09:58:47.568923034 +0000 UTC m=+232.025445567" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.608967 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7qxdc" podStartSLOduration=180.608949551 podStartE2EDuration="3m0.608949551s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:47.607383462 +0000 UTC m=+232.063906005" watchObservedRunningTime="2026-02-24 09:58:47.608949551 +0000 UTC m=+232.065472094" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.642391 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:47 crc kubenswrapper[4755]: E0224 09:58:47.642747 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:48.142731372 +0000 UTC m=+232.599253915 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.685080 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kw78" podStartSLOduration=180.6850544 podStartE2EDuration="3m0.6850544s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:47.64267721 +0000 UTC m=+232.099199753" watchObservedRunningTime="2026-02-24 09:58:47.6850544 +0000 UTC m=+232.141576943" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.721941 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbm9l" podStartSLOduration=179.721923838 podStartE2EDuration="2m59.721923838s" podCreationTimestamp="2026-02-24 09:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-24 09:58:47.685378711 +0000 UTC m=+232.141901254" watchObservedRunningTime="2026-02-24 09:58:47.721923838 +0000 UTC m=+232.178446381" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.722659 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-sr449" podStartSLOduration=180.72265493 podStartE2EDuration="3m0.72265493s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:47.721416672 +0000 UTC m=+232.177939215" watchObservedRunningTime="2026-02-24 09:58:47.72265493 +0000 UTC m=+232.179177473" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.745938 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:47 crc kubenswrapper[4755]: E0224 09:58:47.746240 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:48.24622973 +0000 UTC m=+232.702752273 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.846927 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:47 crc kubenswrapper[4755]: E0224 09:58:47.847201 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:48.34718659 +0000 UTC m=+232.803709133 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.875775 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:47 crc kubenswrapper[4755]: I0224 09:58:47.948891 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:47 crc kubenswrapper[4755]: E0224 09:58:47.949216 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:48.449203171 +0000 UTC m=+232.905725704 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.049947 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:48 crc kubenswrapper[4755]: E0224 09:58:48.050130 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:48.550100769 +0000 UTC m=+233.006623302 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.050244 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:48 crc kubenswrapper[4755]: E0224 09:58:48.050606 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:48.550594224 +0000 UTC m=+233.007116767 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.075128 4755 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-cltn2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.075463 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cltn2" podUID="387127d5-a12c-4dee-96d9-47f8983e0356" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.150921 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:48 crc kubenswrapper[4755]: E0224 09:58:48.151225 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-24 09:58:48.651211493 +0000 UTC m=+233.107734036 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.173353 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" event={"ID":"62383901-634c-43e5-9177-4be83e12d514","Type":"ContainerStarted","Data":"0f1e28aa20631404fd742059d579b5213eb8413cd9955fff359c1dc8b6f017b2"} Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.174744 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kksxb" event={"ID":"4139cb8d-4ba9-4a77-8858-137229c972db","Type":"ContainerStarted","Data":"767c8f1225878c93a3dbe42cbcd95962282736869cc9141165bfd0e07b1e9cb4"} Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.177177 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lj8d5" event={"ID":"0d8a1f03-1c60-409f-85c0-5d2dad7e624c","Type":"ContainerStarted","Data":"8de40e58c828b8c10f08091bcc62eccf045224063f7d5bb702b19e04a488fce1"} Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.177219 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lj8d5" event={"ID":"0d8a1f03-1c60-409f-85c0-5d2dad7e624c","Type":"ContainerStarted","Data":"70783f192ceba079954f41011d00a51ec5cd2d4a9e1349a31bdcd8c6f84b136f"} Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.177825 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-dns/dns-default-lj8d5" Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.179235 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mhrgs" event={"ID":"2878eeb3-bbaa-4588-aa70-a6aa92e4c5a0","Type":"ContainerStarted","Data":"4c6069cf1a9e906409ccafee91e8a1fe60ba7a8f813835d0d830aa0b1b707660"} Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.181004 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-l7frg" event={"ID":"8694e8ab-019b-41c1-bf7c-49d3ca10f213","Type":"ContainerStarted","Data":"6fbd3c09f4d0390d91f9ad6e68dc090e5a46f13f69761f1412841b580ce9a997"} Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.182779 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7qxdc" event={"ID":"a4f3eb31-20c7-4549-9df4-979045249f81","Type":"ContainerStarted","Data":"874ae23aa37b07a2ea7626df4789dcee95b21216cb30056aeb279ef460f1daaf"} Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.186085 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" event={"ID":"b7bace2b-7d6c-4332-b9d3-a1848568dfb0","Type":"ContainerStarted","Data":"a43c94f446a0076083fbc7020e0e84318b2ead814849a44936a7dcb0f965f5ed"} Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.188672 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-w47s8" event={"ID":"68f685bc-87e6-44fb-97dd-798196ff677d","Type":"ContainerStarted","Data":"b264236c36659a78a1f603265b63e3dfbd5fca401a5627560967924ebc718e59"} Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.189856 4755 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tp8zp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.189913 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" podUID="e105e7e0-0046-47ca-8a73-c27e385a0301" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.191283 4755 patch_prober.go:28] interesting pod/downloads-7954f5f757-vhlbl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.191322 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vhlbl" podUID="d1c02e5f-aebc-4edb-9e99-f26fc84a32ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.219340 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dprd" Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.219876 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-lj8d5" podStartSLOduration=8.219858328 podStartE2EDuration="8.219858328s" podCreationTimestamp="2026-02-24 09:58:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:48.21961648 +0000 UTC m=+232.676139023" watchObservedRunningTime="2026-02-24 09:58:48.219858328 +0000 UTC m=+232.676380871" Feb 24 09:58:48 crc 
kubenswrapper[4755]: I0224 09:58:48.222544 4755 patch_prober.go:28] interesting pod/router-default-5444994796-jb8zb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:58:48 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Feb 24 09:58:48 crc kubenswrapper[4755]: [+]process-running ok Feb 24 09:58:48 crc kubenswrapper[4755]: healthz check failed Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.222592 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jb8zb" podUID="b75345bf-b93f-471f-9b11-e5c5695e7e6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.223595 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqqc4" Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.232858 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-w47s8" Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.251809 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:48 crc kubenswrapper[4755]: E0224 09:58:48.252132 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-24 09:58:48.75211998 +0000 UTC m=+233.208642523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.256255 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mhrgs" podStartSLOduration=181.25624624 podStartE2EDuration="3m1.25624624s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:48.253982219 +0000 UTC m=+232.710504762" watchObservedRunningTime="2026-02-24 09:58:48.25624624 +0000 UTC m=+232.712768783" Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.346961 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" podStartSLOduration=181.346940277 podStartE2EDuration="3m1.346940277s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:48.324757881 +0000 UTC m=+232.781280424" watchObservedRunningTime="2026-02-24 09:58:48.346940277 +0000 UTC m=+232.803462820" Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.353013 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:48 crc kubenswrapper[4755]: E0224 09:58:48.353203 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:48.853166832 +0000 UTC m=+233.309689375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.355247 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:48 crc kubenswrapper[4755]: E0224 09:58:48.356429 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:48.856412044 +0000 UTC m=+233.312934587 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.456976 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:48 crc kubenswrapper[4755]: E0224 09:58:48.457266 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:48.957235109 +0000 UTC m=+233.413757652 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.457447 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:48 crc kubenswrapper[4755]: E0224 09:58:48.457727 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:48.957714125 +0000 UTC m=+233.414236658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.558245 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:48 crc kubenswrapper[4755]: E0224 09:58:48.558556 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:49.05854218 +0000 UTC m=+233.515064713 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.659214 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:48 crc kubenswrapper[4755]: E0224 09:58:48.659525 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:49.159512809 +0000 UTC m=+233.616035352 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.760621 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:48 crc kubenswrapper[4755]: E0224 09:58:48.760853 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:49.26081954 +0000 UTC m=+233.717342093 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.761013 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:48 crc kubenswrapper[4755]: E0224 09:58:48.762760 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:49.26274336 +0000 UTC m=+233.719265963 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.812245 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cltn2" Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.861779 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:48 crc kubenswrapper[4755]: E0224 09:58:48.861877 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:49.3618626 +0000 UTC m=+233.818385143 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.862186 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:48 crc kubenswrapper[4755]: E0224 09:58:48.862664 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:49.362656586 +0000 UTC m=+233.819179129 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:48 crc kubenswrapper[4755]: I0224 09:58:48.964007 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:48 crc kubenswrapper[4755]: E0224 09:58:48.964613 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:49.464599266 +0000 UTC m=+233.921121809 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.065561 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:49 crc kubenswrapper[4755]: E0224 09:58:49.066044 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:49.56602992 +0000 UTC m=+234.022552463 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.144445 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d9fml"] Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.166039 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:49 crc kubenswrapper[4755]: E0224 09:58:49.166489 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:49.666471293 +0000 UTC m=+234.122993836 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.190424 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw"] Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.190670 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw" podUID="28cbd592-dd09-4070-b22e-f67f1d14dde2" containerName="route-controller-manager" containerID="cri-o://4fffd43237dfa9c1920f38df3d4609d68cc6595a22ba018bed599a3fed873a21" gracePeriod=30 Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.199616 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" event={"ID":"62383901-634c-43e5-9177-4be83e12d514","Type":"ContainerStarted","Data":"975cdbffaaa5606ba786797fc87dac2b5d7830caa8f5ed294498fc59207bc62d"} Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.199649 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" event={"ID":"62383901-634c-43e5-9177-4be83e12d514","Type":"ContainerStarted","Data":"169d64435d73427c6ef3f93638b2f65da88e2e0d0baaca3a2c3e97505b3b947c"} Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.200819 4755 patch_prober.go:28] interesting pod/downloads-7954f5f757-vhlbl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: 
connection refused" start-of-body= Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.200852 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vhlbl" podUID="d1c02e5f-aebc-4edb-9e99-f26fc84a32ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.200959 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" podUID="a6fd9421-d674-405c-a6c8-f25ff3c2f9f7" containerName="controller-manager" containerID="cri-o://0d0560804bdbee0a7e0ddd55cea8f59543e417000d7a40a39d1616d7ffda1de2" gracePeriod=30 Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.202055 4755 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tp8zp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.202096 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" podUID="e105e7e0-0046-47ca-8a73-c27e385a0301" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.218174 4755 patch_prober.go:28] interesting pod/router-default-5444994796-jb8zb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:58:49 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Feb 24 09:58:49 crc kubenswrapper[4755]: [+]process-running 
ok Feb 24 09:58:49 crc kubenswrapper[4755]: healthz check failed Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.218217 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jb8zb" podUID="b75345bf-b93f-471f-9b11-e5c5695e7e6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.268164 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:49 crc kubenswrapper[4755]: E0224 09:58:49.269607 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:49.76959169 +0000 UTC m=+234.226114233 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.369047 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:49 crc kubenswrapper[4755]: E0224 09:58:49.369401 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:49.869384722 +0000 UTC m=+234.325907265 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.409830 4755 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.470932 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:49 crc kubenswrapper[4755]: E0224 09:58:49.471357 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:58:49.971339393 +0000 UTC m=+234.427861936 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7tncz" (UID: "275884c2-8599-4867-97aa-04d67ba35182") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.574552 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:49 crc kubenswrapper[4755]: E0224 09:58:49.574934 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:58:50.074918614 +0000 UTC m=+234.531441157 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.595479 4755 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-24T09:58:49.409858833Z","Handler":null,"Name":""} Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.617893 4755 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.617931 4755 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.636252 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.679587 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gnrl\" (UniqueName: \"kubernetes.io/projected/28cbd592-dd09-4070-b22e-f67f1d14dde2-kube-api-access-2gnrl\") pod \"28cbd592-dd09-4070-b22e-f67f1d14dde2\" (UID: \"28cbd592-dd09-4070-b22e-f67f1d14dde2\") " Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.679626 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28cbd592-dd09-4070-b22e-f67f1d14dde2-client-ca\") pod \"28cbd592-dd09-4070-b22e-f67f1d14dde2\" (UID: \"28cbd592-dd09-4070-b22e-f67f1d14dde2\") " Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.679648 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28cbd592-dd09-4070-b22e-f67f1d14dde2-serving-cert\") pod \"28cbd592-dd09-4070-b22e-f67f1d14dde2\" (UID: \"28cbd592-dd09-4070-b22e-f67f1d14dde2\") " Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.679676 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28cbd592-dd09-4070-b22e-f67f1d14dde2-config\") pod \"28cbd592-dd09-4070-b22e-f67f1d14dde2\" (UID: \"28cbd592-dd09-4070-b22e-f67f1d14dde2\") " Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.679773 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:49 crc kubenswrapper[4755]: 
I0224 09:58:49.680609 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28cbd592-dd09-4070-b22e-f67f1d14dde2-client-ca" (OuterVolumeSpecName: "client-ca") pod "28cbd592-dd09-4070-b22e-f67f1d14dde2" (UID: "28cbd592-dd09-4070-b22e-f67f1d14dde2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.681326 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28cbd592-dd09-4070-b22e-f67f1d14dde2-config" (OuterVolumeSpecName: "config") pod "28cbd592-dd09-4070-b22e-f67f1d14dde2" (UID: "28cbd592-dd09-4070-b22e-f67f1d14dde2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.683358 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.687792 4755 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.687824 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.690440 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28cbd592-dd09-4070-b22e-f67f1d14dde2-kube-api-access-2gnrl" (OuterVolumeSpecName: "kube-api-access-2gnrl") pod "28cbd592-dd09-4070-b22e-f67f1d14dde2" (UID: "28cbd592-dd09-4070-b22e-f67f1d14dde2"). InnerVolumeSpecName "kube-api-access-2gnrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.697692 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28cbd592-dd09-4070-b22e-f67f1d14dde2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "28cbd592-dd09-4070-b22e-f67f1d14dde2" (UID: "28cbd592-dd09-4070-b22e-f67f1d14dde2"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.734348 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7tncz\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.780820 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-proxy-ca-bundles\") pod \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\" (UID: \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\") " Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.780947 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.781005 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-client-ca\") pod \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\" (UID: \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\") " Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.781025 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw57c\" (UniqueName: \"kubernetes.io/projected/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-kube-api-access-xw57c\") pod \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\" (UID: \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\") " Feb 24 09:58:49 crc 
kubenswrapper[4755]: I0224 09:58:49.781046 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-serving-cert\") pod \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\" (UID: \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\") " Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.781092 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-config\") pod \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\" (UID: \"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7\") " Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.781286 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gnrl\" (UniqueName: \"kubernetes.io/projected/28cbd592-dd09-4070-b22e-f67f1d14dde2-kube-api-access-2gnrl\") on node \"crc\" DevicePath \"\"" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.781302 4755 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28cbd592-dd09-4070-b22e-f67f1d14dde2-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.781312 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28cbd592-dd09-4070-b22e-f67f1d14dde2-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.781323 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28cbd592-dd09-4070-b22e-f67f1d14dde2-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.782156 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-config" (OuterVolumeSpecName: "config") pod 
"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7" (UID: "a6fd9421-d674-405c-a6c8-f25ff3c2f9f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.782430 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a6fd9421-d674-405c-a6c8-f25ff3c2f9f7" (UID: "a6fd9421-d674-405c-a6c8-f25ff3c2f9f7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.782727 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-client-ca" (OuterVolumeSpecName: "client-ca") pod "a6fd9421-d674-405c-a6c8-f25ff3c2f9f7" (UID: "a6fd9421-d674-405c-a6c8-f25ff3c2f9f7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.785638 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a6fd9421-d674-405c-a6c8-f25ff3c2f9f7" (UID: "a6fd9421-d674-405c-a6c8-f25ff3c2f9f7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.785712 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-kube-api-access-xw57c" (OuterVolumeSpecName: "kube-api-access-xw57c") pod "a6fd9421-d674-405c-a6c8-f25ff3c2f9f7" (UID: "a6fd9421-d674-405c-a6c8-f25ff3c2f9f7"). InnerVolumeSpecName "kube-api-access-xw57c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.787911 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.799375 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.882581 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw57c\" (UniqueName: \"kubernetes.io/projected/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-kube-api-access-xw57c\") on node \"crc\" DevicePath \"\"" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.882869 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.882881 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.882892 4755 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.882900 4755 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.955625 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mqjwv"] Feb 24 09:58:49 crc kubenswrapper[4755]: E0224 09:58:49.956544 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6fd9421-d674-405c-a6c8-f25ff3c2f9f7" containerName="controller-manager" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.956568 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6fd9421-d674-405c-a6c8-f25ff3c2f9f7" containerName="controller-manager" Feb 24 09:58:49 crc kubenswrapper[4755]: E0224 09:58:49.956585 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28cbd592-dd09-4070-b22e-f67f1d14dde2" containerName="route-controller-manager" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.956593 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="28cbd592-dd09-4070-b22e-f67f1d14dde2" containerName="route-controller-manager" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.956716 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6fd9421-d674-405c-a6c8-f25ff3c2f9f7" containerName="controller-manager" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.956738 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="28cbd592-dd09-4070-b22e-f67f1d14dde2" containerName="route-controller-manager" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.957695 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mqjwv" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.963415 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.965714 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mqjwv"] Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.985042 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lkvm\" (UniqueName: \"kubernetes.io/projected/f4344c16-3181-42d3-9d94-6cccd3fe8cc0-kube-api-access-5lkvm\") pod \"community-operators-mqjwv\" (UID: \"f4344c16-3181-42d3-9d94-6cccd3fe8cc0\") " pod="openshift-marketplace/community-operators-mqjwv" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.985215 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4344c16-3181-42d3-9d94-6cccd3fe8cc0-catalog-content\") pod \"community-operators-mqjwv\" (UID: \"f4344c16-3181-42d3-9d94-6cccd3fe8cc0\") " pod="openshift-marketplace/community-operators-mqjwv" Feb 24 09:58:49 crc kubenswrapper[4755]: I0224 09:58:49.985316 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4344c16-3181-42d3-9d94-6cccd3fe8cc0-utilities\") pod \"community-operators-mqjwv\" (UID: \"f4344c16-3181-42d3-9d94-6cccd3fe8cc0\") " pod="openshift-marketplace/community-operators-mqjwv" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.003819 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41148: no serving certificate available for the kubelet" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.018739 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-7tncz"] Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.086744 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4344c16-3181-42d3-9d94-6cccd3fe8cc0-catalog-content\") pod \"community-operators-mqjwv\" (UID: \"f4344c16-3181-42d3-9d94-6cccd3fe8cc0\") " pod="openshift-marketplace/community-operators-mqjwv" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.086802 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4344c16-3181-42d3-9d94-6cccd3fe8cc0-utilities\") pod \"community-operators-mqjwv\" (UID: \"f4344c16-3181-42d3-9d94-6cccd3fe8cc0\") " pod="openshift-marketplace/community-operators-mqjwv" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.086895 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lkvm\" (UniqueName: \"kubernetes.io/projected/f4344c16-3181-42d3-9d94-6cccd3fe8cc0-kube-api-access-5lkvm\") pod \"community-operators-mqjwv\" (UID: \"f4344c16-3181-42d3-9d94-6cccd3fe8cc0\") " pod="openshift-marketplace/community-operators-mqjwv" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.087566 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4344c16-3181-42d3-9d94-6cccd3fe8cc0-catalog-content\") pod \"community-operators-mqjwv\" (UID: \"f4344c16-3181-42d3-9d94-6cccd3fe8cc0\") " pod="openshift-marketplace/community-operators-mqjwv" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.087699 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4344c16-3181-42d3-9d94-6cccd3fe8cc0-utilities\") pod \"community-operators-mqjwv\" (UID: \"f4344c16-3181-42d3-9d94-6cccd3fe8cc0\") " 
pod="openshift-marketplace/community-operators-mqjwv" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.104520 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lkvm\" (UniqueName: \"kubernetes.io/projected/f4344c16-3181-42d3-9d94-6cccd3fe8cc0-kube-api-access-5lkvm\") pod \"community-operators-mqjwv\" (UID: \"f4344c16-3181-42d3-9d94-6cccd3fe8cc0\") " pod="openshift-marketplace/community-operators-mqjwv" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.154253 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bkgvw"] Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.155929 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bkgvw" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.159006 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.165199 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bkgvw"] Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.187812 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47b5e8fc-e79e-4cd2-906b-0d8116e4d608-utilities\") pod \"certified-operators-bkgvw\" (UID: \"47b5e8fc-e79e-4cd2-906b-0d8116e4d608\") " pod="openshift-marketplace/certified-operators-bkgvw" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.187883 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47b5e8fc-e79e-4cd2-906b-0d8116e4d608-catalog-content\") pod \"certified-operators-bkgvw\" (UID: \"47b5e8fc-e79e-4cd2-906b-0d8116e4d608\") " 
pod="openshift-marketplace/certified-operators-bkgvw" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.187925 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkgxx\" (UniqueName: \"kubernetes.io/projected/47b5e8fc-e79e-4cd2-906b-0d8116e4d608-kube-api-access-hkgxx\") pod \"certified-operators-bkgvw\" (UID: \"47b5e8fc-e79e-4cd2-906b-0d8116e4d608\") " pod="openshift-marketplace/certified-operators-bkgvw" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.206096 4755 generic.go:334] "Generic (PLEG): container finished" podID="279eff18-be0a-4f07-81fc-69fef8faac6c" containerID="79564065fbe3403c26cf582934dc2e1bcb7d67b6f7c12c00cba81f1397848543" exitCode=0 Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.206170 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-sr449" event={"ID":"279eff18-be0a-4f07-81fc-69fef8faac6c","Type":"ContainerDied","Data":"79564065fbe3403c26cf582934dc2e1bcb7d67b6f7c12c00cba81f1397848543"} Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.208212 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" event={"ID":"275884c2-8599-4867-97aa-04d67ba35182","Type":"ContainerStarted","Data":"d8ca99e0460dc7e8f287447cffe7db5d085357becb24a02fc148cca2f8a1ab95"} Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.208282 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" event={"ID":"275884c2-8599-4867-97aa-04d67ba35182","Type":"ContainerStarted","Data":"72b1f54ca806d56403ee67bd02409c87980e9f4004ae7c74c7b57daf736b643f"} Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.208337 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 
09:58:50.210033 4755 generic.go:334] "Generic (PLEG): container finished" podID="a6fd9421-d674-405c-a6c8-f25ff3c2f9f7" containerID="0d0560804bdbee0a7e0ddd55cea8f59543e417000d7a40a39d1616d7ffda1de2" exitCode=0 Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.210099 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.210139 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" event={"ID":"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7","Type":"ContainerDied","Data":"0d0560804bdbee0a7e0ddd55cea8f59543e417000d7a40a39d1616d7ffda1de2"} Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.210167 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-d9fml" event={"ID":"a6fd9421-d674-405c-a6c8-f25ff3c2f9f7","Type":"ContainerDied","Data":"0f28b50851621089380b290d71906301e0b1c815868a2e3d9b7b09ee5f446140"} Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.210190 4755 scope.go:117] "RemoveContainer" containerID="0d0560804bdbee0a7e0ddd55cea8f59543e417000d7a40a39d1616d7ffda1de2" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.212481 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" event={"ID":"62383901-634c-43e5-9177-4be83e12d514","Type":"ContainerStarted","Data":"52bb6a9997f621c9a9bd27169e8677968de93b439397b5e852a9f7b5809ad482"} Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.214383 4755 generic.go:334] "Generic (PLEG): container finished" podID="28cbd592-dd09-4070-b22e-f67f1d14dde2" containerID="4fffd43237dfa9c1920f38df3d4609d68cc6595a22ba018bed599a3fed873a21" exitCode=0 Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.214855 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.219028 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw" event={"ID":"28cbd592-dd09-4070-b22e-f67f1d14dde2","Type":"ContainerDied","Data":"4fffd43237dfa9c1920f38df3d4609d68cc6595a22ba018bed599a3fed873a21"} Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.219093 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw" event={"ID":"28cbd592-dd09-4070-b22e-f67f1d14dde2","Type":"ContainerDied","Data":"52f95a710214b4ef44d1cc38e236f836be71f14a3f032d0af570d02b3d629b9c"} Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.229226 4755 patch_prober.go:28] interesting pod/router-default-5444994796-jb8zb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:58:50 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Feb 24 09:58:50 crc kubenswrapper[4755]: [+]process-running ok Feb 24 09:58:50 crc kubenswrapper[4755]: healthz check failed Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.229278 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jb8zb" podUID="b75345bf-b93f-471f-9b11-e5c5695e7e6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.229976 4755 scope.go:117] "RemoveContainer" containerID="0d0560804bdbee0a7e0ddd55cea8f59543e417000d7a40a39d1616d7ffda1de2" Feb 24 09:58:50 crc kubenswrapper[4755]: E0224 09:58:50.230832 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"0d0560804bdbee0a7e0ddd55cea8f59543e417000d7a40a39d1616d7ffda1de2\": container with ID starting with 0d0560804bdbee0a7e0ddd55cea8f59543e417000d7a40a39d1616d7ffda1de2 not found: ID does not exist" containerID="0d0560804bdbee0a7e0ddd55cea8f59543e417000d7a40a39d1616d7ffda1de2" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.230871 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d0560804bdbee0a7e0ddd55cea8f59543e417000d7a40a39d1616d7ffda1de2"} err="failed to get container status \"0d0560804bdbee0a7e0ddd55cea8f59543e417000d7a40a39d1616d7ffda1de2\": rpc error: code = NotFound desc = could not find container \"0d0560804bdbee0a7e0ddd55cea8f59543e417000d7a40a39d1616d7ffda1de2\": container with ID starting with 0d0560804bdbee0a7e0ddd55cea8f59543e417000d7a40a39d1616d7ffda1de2 not found: ID does not exist" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.230895 4755 scope.go:117] "RemoveContainer" containerID="4fffd43237dfa9c1920f38df3d4609d68cc6595a22ba018bed599a3fed873a21" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.250883 4755 scope.go:117] "RemoveContainer" containerID="4fffd43237dfa9c1920f38df3d4609d68cc6595a22ba018bed599a3fed873a21" Feb 24 09:58:50 crc kubenswrapper[4755]: E0224 09:58:50.251485 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fffd43237dfa9c1920f38df3d4609d68cc6595a22ba018bed599a3fed873a21\": container with ID starting with 4fffd43237dfa9c1920f38df3d4609d68cc6595a22ba018bed599a3fed873a21 not found: ID does not exist" containerID="4fffd43237dfa9c1920f38df3d4609d68cc6595a22ba018bed599a3fed873a21" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.251533 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fffd43237dfa9c1920f38df3d4609d68cc6595a22ba018bed599a3fed873a21"} err="failed to get container status 
\"4fffd43237dfa9c1920f38df3d4609d68cc6595a22ba018bed599a3fed873a21\": rpc error: code = NotFound desc = could not find container \"4fffd43237dfa9c1920f38df3d4609d68cc6595a22ba018bed599a3fed873a21\": container with ID starting with 4fffd43237dfa9c1920f38df3d4609d68cc6595a22ba018bed599a3fed873a21 not found: ID does not exist" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.281091 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mqjwv" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.284994 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" podStartSLOduration=183.284972983 podStartE2EDuration="3m3.284972983s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:50.283670603 +0000 UTC m=+234.740193146" watchObservedRunningTime="2026-02-24 09:58:50.284972983 +0000 UTC m=+234.741495526" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.287258 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-zmmhb" podStartSLOduration=10.287247274 podStartE2EDuration="10.287247274s" podCreationTimestamp="2026-02-24 09:58:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:50.262990363 +0000 UTC m=+234.719512906" watchObservedRunningTime="2026-02-24 09:58:50.287247274 +0000 UTC m=+234.743769817" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.289511 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47b5e8fc-e79e-4cd2-906b-0d8116e4d608-utilities\") pod \"certified-operators-bkgvw\" (UID: 
\"47b5e8fc-e79e-4cd2-906b-0d8116e4d608\") " pod="openshift-marketplace/certified-operators-bkgvw" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.289693 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47b5e8fc-e79e-4cd2-906b-0d8116e4d608-catalog-content\") pod \"certified-operators-bkgvw\" (UID: \"47b5e8fc-e79e-4cd2-906b-0d8116e4d608\") " pod="openshift-marketplace/certified-operators-bkgvw" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.289829 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkgxx\" (UniqueName: \"kubernetes.io/projected/47b5e8fc-e79e-4cd2-906b-0d8116e4d608-kube-api-access-hkgxx\") pod \"certified-operators-bkgvw\" (UID: \"47b5e8fc-e79e-4cd2-906b-0d8116e4d608\") " pod="openshift-marketplace/certified-operators-bkgvw" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.291486 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47b5e8fc-e79e-4cd2-906b-0d8116e4d608-utilities\") pod \"certified-operators-bkgvw\" (UID: \"47b5e8fc-e79e-4cd2-906b-0d8116e4d608\") " pod="openshift-marketplace/certified-operators-bkgvw" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.291710 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47b5e8fc-e79e-4cd2-906b-0d8116e4d608-catalog-content\") pod \"certified-operators-bkgvw\" (UID: \"47b5e8fc-e79e-4cd2-906b-0d8116e4d608\") " pod="openshift-marketplace/certified-operators-bkgvw" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.302173 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw"] Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.309676 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2g2cw"] Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.314754 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkgxx\" (UniqueName: \"kubernetes.io/projected/47b5e8fc-e79e-4cd2-906b-0d8116e4d608-kube-api-access-hkgxx\") pod \"certified-operators-bkgvw\" (UID: \"47b5e8fc-e79e-4cd2-906b-0d8116e4d608\") " pod="openshift-marketplace/certified-operators-bkgvw" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.326817 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28cbd592-dd09-4070-b22e-f67f1d14dde2" path="/var/lib/kubelet/pods/28cbd592-dd09-4070-b22e-f67f1d14dde2/volumes" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.327739 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.328141 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d9fml"] Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.328172 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d9fml"] Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.377360 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x69lf"] Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.397026 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x69lf" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.400984 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x69lf"] Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.449428 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j"] Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.450561 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.454137 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz"] Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.454862 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.455309 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.455658 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.456035 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.456214 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.456320 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" 
Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.460381 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.460381 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.460561 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.461666 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz"] Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.462401 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.462742 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.462949 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.463075 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.463352 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.464425 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j"] Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.474580 4755 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bkgvw" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.492907 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d807c0b7-7410-475a-bcde-00d34547ae06-config\") pod \"route-controller-manager-76f8646bc4-spfsz\" (UID: \"d807c0b7-7410-475a-bcde-00d34547ae06\") " pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.492959 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmr56\" (UniqueName: \"kubernetes.io/projected/d61d7a2b-f731-4982-bab4-8c6db2c8c963-kube-api-access-vmr56\") pod \"community-operators-x69lf\" (UID: \"d61d7a2b-f731-4982-bab4-8c6db2c8c963\") " pod="openshift-marketplace/community-operators-x69lf" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.492986 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24f886fe-4911-48f1-8aaa-46e0ddabb144-serving-cert\") pod \"controller-manager-6f6cbdcc5d-nsm7j\" (UID: \"24f886fe-4911-48f1-8aaa-46e0ddabb144\") " pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.493008 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d61d7a2b-f731-4982-bab4-8c6db2c8c963-catalog-content\") pod \"community-operators-x69lf\" (UID: \"d61d7a2b-f731-4982-bab4-8c6db2c8c963\") " pod="openshift-marketplace/community-operators-x69lf" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.493045 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24f886fe-4911-48f1-8aaa-46e0ddabb144-proxy-ca-bundles\") pod \"controller-manager-6f6cbdcc5d-nsm7j\" (UID: \"24f886fe-4911-48f1-8aaa-46e0ddabb144\") " pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.493092 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r96sx\" (UniqueName: \"kubernetes.io/projected/d807c0b7-7410-475a-bcde-00d34547ae06-kube-api-access-r96sx\") pod \"route-controller-manager-76f8646bc4-spfsz\" (UID: \"d807c0b7-7410-475a-bcde-00d34547ae06\") " pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.493125 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24f886fe-4911-48f1-8aaa-46e0ddabb144-client-ca\") pod \"controller-manager-6f6cbdcc5d-nsm7j\" (UID: \"24f886fe-4911-48f1-8aaa-46e0ddabb144\") " pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.493158 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24f886fe-4911-48f1-8aaa-46e0ddabb144-config\") pod \"controller-manager-6f6cbdcc5d-nsm7j\" (UID: \"24f886fe-4911-48f1-8aaa-46e0ddabb144\") " pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.493177 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d807c0b7-7410-475a-bcde-00d34547ae06-serving-cert\") pod \"route-controller-manager-76f8646bc4-spfsz\" (UID: \"d807c0b7-7410-475a-bcde-00d34547ae06\") " 
pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.493216 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d61d7a2b-f731-4982-bab4-8c6db2c8c963-utilities\") pod \"community-operators-x69lf\" (UID: \"d61d7a2b-f731-4982-bab4-8c6db2c8c963\") " pod="openshift-marketplace/community-operators-x69lf" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.493243 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjhh7\" (UniqueName: \"kubernetes.io/projected/24f886fe-4911-48f1-8aaa-46e0ddabb144-kube-api-access-rjhh7\") pod \"controller-manager-6f6cbdcc5d-nsm7j\" (UID: \"24f886fe-4911-48f1-8aaa-46e0ddabb144\") " pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.493276 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d807c0b7-7410-475a-bcde-00d34547ae06-client-ca\") pod \"route-controller-manager-76f8646bc4-spfsz\" (UID: \"d807c0b7-7410-475a-bcde-00d34547ae06\") " pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.551142 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mqjwv"] Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.553658 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2tztw"] Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.554586 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2tztw" Feb 24 09:58:50 crc kubenswrapper[4755]: W0224 09:58:50.555395 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4344c16_3181_42d3_9d94_6cccd3fe8cc0.slice/crio-9eee4239dd13b83a9950cf46f606653a118c2c756e910f2cd0f851897bd6da1e WatchSource:0}: Error finding container 9eee4239dd13b83a9950cf46f606653a118c2c756e910f2cd0f851897bd6da1e: Status 404 returned error can't find the container with id 9eee4239dd13b83a9950cf46f606653a118c2c756e910f2cd0f851897bd6da1e Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.591898 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2tztw"] Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.594568 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24f886fe-4911-48f1-8aaa-46e0ddabb144-client-ca\") pod \"controller-manager-6f6cbdcc5d-nsm7j\" (UID: \"24f886fe-4911-48f1-8aaa-46e0ddabb144\") " pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.594637 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24f886fe-4911-48f1-8aaa-46e0ddabb144-config\") pod \"controller-manager-6f6cbdcc5d-nsm7j\" (UID: \"24f886fe-4911-48f1-8aaa-46e0ddabb144\") " pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.594657 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d807c0b7-7410-475a-bcde-00d34547ae06-serving-cert\") pod \"route-controller-manager-76f8646bc4-spfsz\" (UID: \"d807c0b7-7410-475a-bcde-00d34547ae06\") " 
pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.594694 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d61d7a2b-f731-4982-bab4-8c6db2c8c963-utilities\") pod \"community-operators-x69lf\" (UID: \"d61d7a2b-f731-4982-bab4-8c6db2c8c963\") " pod="openshift-marketplace/community-operators-x69lf" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.594718 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjhh7\" (UniqueName: \"kubernetes.io/projected/24f886fe-4911-48f1-8aaa-46e0ddabb144-kube-api-access-rjhh7\") pod \"controller-manager-6f6cbdcc5d-nsm7j\" (UID: \"24f886fe-4911-48f1-8aaa-46e0ddabb144\") " pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.594739 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d807c0b7-7410-475a-bcde-00d34547ae06-client-ca\") pod \"route-controller-manager-76f8646bc4-spfsz\" (UID: \"d807c0b7-7410-475a-bcde-00d34547ae06\") " pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.594787 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d807c0b7-7410-475a-bcde-00d34547ae06-config\") pod \"route-controller-manager-76f8646bc4-spfsz\" (UID: \"d807c0b7-7410-475a-bcde-00d34547ae06\") " pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.594810 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d7c94c8e-698e-4c14-ae46-16f03e666f27-catalog-content\") pod \"certified-operators-2tztw\" (UID: \"d7c94c8e-698e-4c14-ae46-16f03e666f27\") " pod="openshift-marketplace/certified-operators-2tztw" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.594831 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmr56\" (UniqueName: \"kubernetes.io/projected/d61d7a2b-f731-4982-bab4-8c6db2c8c963-kube-api-access-vmr56\") pod \"community-operators-x69lf\" (UID: \"d61d7a2b-f731-4982-bab4-8c6db2c8c963\") " pod="openshift-marketplace/community-operators-x69lf" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.594848 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24f886fe-4911-48f1-8aaa-46e0ddabb144-serving-cert\") pod \"controller-manager-6f6cbdcc5d-nsm7j\" (UID: \"24f886fe-4911-48f1-8aaa-46e0ddabb144\") " pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.594865 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d61d7a2b-f731-4982-bab4-8c6db2c8c963-catalog-content\") pod \"community-operators-x69lf\" (UID: \"d61d7a2b-f731-4982-bab4-8c6db2c8c963\") " pod="openshift-marketplace/community-operators-x69lf" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.594885 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24f886fe-4911-48f1-8aaa-46e0ddabb144-proxy-ca-bundles\") pod \"controller-manager-6f6cbdcc5d-nsm7j\" (UID: \"24f886fe-4911-48f1-8aaa-46e0ddabb144\") " pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.594902 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7c94c8e-698e-4c14-ae46-16f03e666f27-utilities\") pod \"certified-operators-2tztw\" (UID: \"d7c94c8e-698e-4c14-ae46-16f03e666f27\") " pod="openshift-marketplace/certified-operators-2tztw" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.594921 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r96sx\" (UniqueName: \"kubernetes.io/projected/d807c0b7-7410-475a-bcde-00d34547ae06-kube-api-access-r96sx\") pod \"route-controller-manager-76f8646bc4-spfsz\" (UID: \"d807c0b7-7410-475a-bcde-00d34547ae06\") " pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.594941 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhccj\" (UniqueName: \"kubernetes.io/projected/d7c94c8e-698e-4c14-ae46-16f03e666f27-kube-api-access-fhccj\") pod \"certified-operators-2tztw\" (UID: \"d7c94c8e-698e-4c14-ae46-16f03e666f27\") " pod="openshift-marketplace/certified-operators-2tztw" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.595724 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24f886fe-4911-48f1-8aaa-46e0ddabb144-client-ca\") pod \"controller-manager-6f6cbdcc5d-nsm7j\" (UID: \"24f886fe-4911-48f1-8aaa-46e0ddabb144\") " pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.596844 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24f886fe-4911-48f1-8aaa-46e0ddabb144-config\") pod \"controller-manager-6f6cbdcc5d-nsm7j\" (UID: \"24f886fe-4911-48f1-8aaa-46e0ddabb144\") " pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" Feb 
24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.599498 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d61d7a2b-f731-4982-bab4-8c6db2c8c963-utilities\") pod \"community-operators-x69lf\" (UID: \"d61d7a2b-f731-4982-bab4-8c6db2c8c963\") " pod="openshift-marketplace/community-operators-x69lf" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.599948 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d61d7a2b-f731-4982-bab4-8c6db2c8c963-catalog-content\") pod \"community-operators-x69lf\" (UID: \"d61d7a2b-f731-4982-bab4-8c6db2c8c963\") " pod="openshift-marketplace/community-operators-x69lf" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.600386 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d807c0b7-7410-475a-bcde-00d34547ae06-client-ca\") pod \"route-controller-manager-76f8646bc4-spfsz\" (UID: \"d807c0b7-7410-475a-bcde-00d34547ae06\") " pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.600823 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d807c0b7-7410-475a-bcde-00d34547ae06-config\") pod \"route-controller-manager-76f8646bc4-spfsz\" (UID: \"d807c0b7-7410-475a-bcde-00d34547ae06\") " pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.611589 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24f886fe-4911-48f1-8aaa-46e0ddabb144-proxy-ca-bundles\") pod \"controller-manager-6f6cbdcc5d-nsm7j\" (UID: \"24f886fe-4911-48f1-8aaa-46e0ddabb144\") " 
pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.612650 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24f886fe-4911-48f1-8aaa-46e0ddabb144-serving-cert\") pod \"controller-manager-6f6cbdcc5d-nsm7j\" (UID: \"24f886fe-4911-48f1-8aaa-46e0ddabb144\") " pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.620870 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d807c0b7-7410-475a-bcde-00d34547ae06-serving-cert\") pod \"route-controller-manager-76f8646bc4-spfsz\" (UID: \"d807c0b7-7410-475a-bcde-00d34547ae06\") " pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.629800 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmr56\" (UniqueName: \"kubernetes.io/projected/d61d7a2b-f731-4982-bab4-8c6db2c8c963-kube-api-access-vmr56\") pod \"community-operators-x69lf\" (UID: \"d61d7a2b-f731-4982-bab4-8c6db2c8c963\") " pod="openshift-marketplace/community-operators-x69lf" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.655333 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjhh7\" (UniqueName: \"kubernetes.io/projected/24f886fe-4911-48f1-8aaa-46e0ddabb144-kube-api-access-rjhh7\") pod \"controller-manager-6f6cbdcc5d-nsm7j\" (UID: \"24f886fe-4911-48f1-8aaa-46e0ddabb144\") " pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.680231 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r96sx\" (UniqueName: \"kubernetes.io/projected/d807c0b7-7410-475a-bcde-00d34547ae06-kube-api-access-r96sx\") pod 
\"route-controller-manager-76f8646bc4-spfsz\" (UID: \"d807c0b7-7410-475a-bcde-00d34547ae06\") " pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.696799 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7c94c8e-698e-4c14-ae46-16f03e666f27-utilities\") pod \"certified-operators-2tztw\" (UID: \"d7c94c8e-698e-4c14-ae46-16f03e666f27\") " pod="openshift-marketplace/certified-operators-2tztw" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.696863 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhccj\" (UniqueName: \"kubernetes.io/projected/d7c94c8e-698e-4c14-ae46-16f03e666f27-kube-api-access-fhccj\") pod \"certified-operators-2tztw\" (UID: \"d7c94c8e-698e-4c14-ae46-16f03e666f27\") " pod="openshift-marketplace/certified-operators-2tztw" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.696953 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7c94c8e-698e-4c14-ae46-16f03e666f27-catalog-content\") pod \"certified-operators-2tztw\" (UID: \"d7c94c8e-698e-4c14-ae46-16f03e666f27\") " pod="openshift-marketplace/certified-operators-2tztw" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.697376 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7c94c8e-698e-4c14-ae46-16f03e666f27-utilities\") pod \"certified-operators-2tztw\" (UID: \"d7c94c8e-698e-4c14-ae46-16f03e666f27\") " pod="openshift-marketplace/certified-operators-2tztw" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.697501 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7c94c8e-698e-4c14-ae46-16f03e666f27-catalog-content\") 
pod \"certified-operators-2tztw\" (UID: \"d7c94c8e-698e-4c14-ae46-16f03e666f27\") " pod="openshift-marketplace/certified-operators-2tztw" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.719332 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhccj\" (UniqueName: \"kubernetes.io/projected/d7c94c8e-698e-4c14-ae46-16f03e666f27-kube-api-access-fhccj\") pod \"certified-operators-2tztw\" (UID: \"d7c94c8e-698e-4c14-ae46-16f03e666f27\") " pod="openshift-marketplace/certified-operators-2tztw" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.731986 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x69lf" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.766719 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bkgvw"] Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.774436 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" Feb 24 09:58:50 crc kubenswrapper[4755]: W0224 09:58:50.776076 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47b5e8fc_e79e_4cd2_906b_0d8116e4d608.slice/crio-0323e89ad66aa6bacfaa80a8c98fd295c001b29d03f78d386488e1e3616c1656 WatchSource:0}: Error finding container 0323e89ad66aa6bacfaa80a8c98fd295c001b29d03f78d386488e1e3616c1656: Status 404 returned error can't find the container with id 0323e89ad66aa6bacfaa80a8c98fd295c001b29d03f78d386488e1e3616c1656 Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.783529 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.893886 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2tztw" Feb 24 09:58:50 crc kubenswrapper[4755]: I0224 09:58:50.964898 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x69lf"] Feb 24 09:58:51 crc kubenswrapper[4755]: W0224 09:58:51.031559 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd61d7a2b_f731_4982_bab4_8c6db2c8c963.slice/crio-55024261f62a1e1ecd9980b56a1a21478076919a5afd52fcdc9a177ae8589c07 WatchSource:0}: Error finding container 55024261f62a1e1ecd9980b56a1a21478076919a5afd52fcdc9a177ae8589c07: Status 404 returned error can't find the container with id 55024261f62a1e1ecd9980b56a1a21478076919a5afd52fcdc9a177ae8589c07 Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.039871 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j"] Feb 24 09:58:51 crc kubenswrapper[4755]: W0224 09:58:51.049590 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24f886fe_4911_48f1_8aaa_46e0ddabb144.slice/crio-51cf0fd19d85380a570707e792596284e788aed18799a5aaa5517ae6f5837c6b WatchSource:0}: Error finding container 51cf0fd19d85380a570707e792596284e788aed18799a5aaa5517ae6f5837c6b: Status 404 returned error can't find the container with id 51cf0fd19d85380a570707e792596284e788aed18799a5aaa5517ae6f5837c6b Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.099535 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz"] Feb 24 09:58:51 crc kubenswrapper[4755]: W0224 09:58:51.111162 4755 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd807c0b7_7410_475a_bcde_00d34547ae06.slice/crio-656d712876af17c46f63da2e620865769ad98ec06bc2a19fae64089f80c97774 WatchSource:0}: Error finding container 656d712876af17c46f63da2e620865769ad98ec06bc2a19fae64089f80c97774: Status 404 returned error can't find the container with id 656d712876af17c46f63da2e620865769ad98ec06bc2a19fae64089f80c97774 Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.150800 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2tztw"] Feb 24 09:58:51 crc kubenswrapper[4755]: W0224 09:58:51.162783 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7c94c8e_698e_4c14_ae46_16f03e666f27.slice/crio-22fdb7e186e52f8f476ae63fec4c92e4333ddc9a5669d0189ea1ea31bee4f3f3 WatchSource:0}: Error finding container 22fdb7e186e52f8f476ae63fec4c92e4333ddc9a5669d0189ea1ea31bee4f3f3: Status 404 returned error can't find the container with id 22fdb7e186e52f8f476ae63fec4c92e4333ddc9a5669d0189ea1ea31bee4f3f3 Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.220729 4755 patch_prober.go:28] interesting pod/router-default-5444994796-jb8zb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:58:51 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Feb 24 09:58:51 crc kubenswrapper[4755]: [+]process-running ok Feb 24 09:58:51 crc kubenswrapper[4755]: healthz check failed Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.220776 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jb8zb" podUID="b75345bf-b93f-471f-9b11-e5c5695e7e6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 
09:58:51.221317 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" event={"ID":"d807c0b7-7410-475a-bcde-00d34547ae06","Type":"ContainerStarted","Data":"656d712876af17c46f63da2e620865769ad98ec06bc2a19fae64089f80c97774"} Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.223365 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" event={"ID":"24f886fe-4911-48f1-8aaa-46e0ddabb144","Type":"ContainerStarted","Data":"fa2dc332da46b7c32a5c204abbadba682ffb9c91322d579e74633c01de865508"} Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.223398 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" event={"ID":"24f886fe-4911-48f1-8aaa-46e0ddabb144","Type":"ContainerStarted","Data":"51cf0fd19d85380a570707e792596284e788aed18799a5aaa5517ae6f5837c6b"} Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.224177 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.225054 4755 generic.go:334] "Generic (PLEG): container finished" podID="f4344c16-3181-42d3-9d94-6cccd3fe8cc0" containerID="02340fd8a8d5b60e58f8085c7d639ae63f0352ad0eb303b02fc4362fdcf072b8" exitCode=0 Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.225114 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqjwv" event={"ID":"f4344c16-3181-42d3-9d94-6cccd3fe8cc0","Type":"ContainerDied","Data":"02340fd8a8d5b60e58f8085c7d639ae63f0352ad0eb303b02fc4362fdcf072b8"} Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.225129 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqjwv" 
event={"ID":"f4344c16-3181-42d3-9d94-6cccd3fe8cc0","Type":"ContainerStarted","Data":"9eee4239dd13b83a9950cf46f606653a118c2c756e910f2cd0f851897bd6da1e"} Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.226841 4755 patch_prober.go:28] interesting pod/controller-manager-6f6cbdcc5d-nsm7j container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.47:8443/healthz\": dial tcp 10.217.0.47:8443: connect: connection refused" start-of-body= Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.226869 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" podUID="24f886fe-4911-48f1-8aaa-46e0ddabb144" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.47:8443/healthz\": dial tcp 10.217.0.47:8443: connect: connection refused" Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.226996 4755 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.230011 4755 generic.go:334] "Generic (PLEG): container finished" podID="47b5e8fc-e79e-4cd2-906b-0d8116e4d608" containerID="f093e7b54ec5a4516ebf73dacedaf9e1c0037e65f3f6f40c11d830a08e7a839c" exitCode=0 Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.230112 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkgvw" event={"ID":"47b5e8fc-e79e-4cd2-906b-0d8116e4d608","Type":"ContainerDied","Data":"f093e7b54ec5a4516ebf73dacedaf9e1c0037e65f3f6f40c11d830a08e7a839c"} Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.230366 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkgvw" event={"ID":"47b5e8fc-e79e-4cd2-906b-0d8116e4d608","Type":"ContainerStarted","Data":"0323e89ad66aa6bacfaa80a8c98fd295c001b29d03f78d386488e1e3616c1656"} Feb 24 09:58:51 crc 
kubenswrapper[4755]: I0224 09:58:51.246838 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tztw" event={"ID":"d7c94c8e-698e-4c14-ae46-16f03e666f27","Type":"ContainerStarted","Data":"22fdb7e186e52f8f476ae63fec4c92e4333ddc9a5669d0189ea1ea31bee4f3f3"} Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.249506 4755 generic.go:334] "Generic (PLEG): container finished" podID="d61d7a2b-f731-4982-bab4-8c6db2c8c963" containerID="3d7d68159aa6437c8a9ed9e53dbca395f43feb7a919b6d09bb422e1a4114c7a5" exitCode=0 Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.249606 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x69lf" event={"ID":"d61d7a2b-f731-4982-bab4-8c6db2c8c963","Type":"ContainerDied","Data":"3d7d68159aa6437c8a9ed9e53dbca395f43feb7a919b6d09bb422e1a4114c7a5"} Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.249632 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x69lf" event={"ID":"d61d7a2b-f731-4982-bab4-8c6db2c8c963","Type":"ContainerStarted","Data":"55024261f62a1e1ecd9980b56a1a21478076919a5afd52fcdc9a177ae8589c07"} Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.257025 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" podStartSLOduration=2.257008946 podStartE2EDuration="2.257008946s" podCreationTimestamp="2026-02-24 09:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:51.241338405 +0000 UTC m=+235.697860948" watchObservedRunningTime="2026-02-24 09:58:51.257008946 +0000 UTC m=+235.713531479" Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.458280 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-sr449" Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.611296 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcjxs\" (UniqueName: \"kubernetes.io/projected/279eff18-be0a-4f07-81fc-69fef8faac6c-kube-api-access-gcjxs\") pod \"279eff18-be0a-4f07-81fc-69fef8faac6c\" (UID: \"279eff18-be0a-4f07-81fc-69fef8faac6c\") " Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.611342 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/279eff18-be0a-4f07-81fc-69fef8faac6c-secret-volume\") pod \"279eff18-be0a-4f07-81fc-69fef8faac6c\" (UID: \"279eff18-be0a-4f07-81fc-69fef8faac6c\") " Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.611375 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/279eff18-be0a-4f07-81fc-69fef8faac6c-config-volume\") pod \"279eff18-be0a-4f07-81fc-69fef8faac6c\" (UID: \"279eff18-be0a-4f07-81fc-69fef8faac6c\") " Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.612123 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/279eff18-be0a-4f07-81fc-69fef8faac6c-config-volume" (OuterVolumeSpecName: "config-volume") pod "279eff18-be0a-4f07-81fc-69fef8faac6c" (UID: "279eff18-be0a-4f07-81fc-69fef8faac6c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.616593 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/279eff18-be0a-4f07-81fc-69fef8faac6c-kube-api-access-gcjxs" (OuterVolumeSpecName: "kube-api-access-gcjxs") pod "279eff18-be0a-4f07-81fc-69fef8faac6c" (UID: "279eff18-be0a-4f07-81fc-69fef8faac6c"). 
InnerVolumeSpecName "kube-api-access-gcjxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.616784 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/279eff18-be0a-4f07-81fc-69fef8faac6c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "279eff18-be0a-4f07-81fc-69fef8faac6c" (UID: "279eff18-be0a-4f07-81fc-69fef8faac6c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.695846 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.695921 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.712258 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcjxs\" (UniqueName: \"kubernetes.io/projected/279eff18-be0a-4f07-81fc-69fef8faac6c-kube-api-access-gcjxs\") on node \"crc\" DevicePath \"\"" Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.712291 4755 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/279eff18-be0a-4f07-81fc-69fef8faac6c-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.712300 4755 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/279eff18-be0a-4f07-81fc-69fef8faac6c-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 09:58:51 crc kubenswrapper[4755]: I0224 09:58:51.821921 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.081566 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 24 09:58:52 crc kubenswrapper[4755]: E0224 09:58:52.083164 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="279eff18-be0a-4f07-81fc-69fef8faac6c" containerName="collect-profiles" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.083364 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="279eff18-be0a-4f07-81fc-69fef8faac6c" containerName="collect-profiles" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.083599 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="279eff18-be0a-4f07-81fc-69fef8faac6c" containerName="collect-profiles" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.084790 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.089999 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.089995 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.093350 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.126865 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e686c093-6253-42ec-a1a6-7a8553b42533-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e686c093-6253-42ec-a1a6-7a8553b42533\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.127093 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e686c093-6253-42ec-a1a6-7a8553b42533-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e686c093-6253-42ec-a1a6-7a8553b42533\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.153643 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6gt9z"] Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.155323 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gt9z" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.158795 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.161876 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gt9z"] Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.219013 4755 patch_prober.go:28] interesting pod/router-default-5444994796-jb8zb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:58:52 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Feb 24 09:58:52 crc kubenswrapper[4755]: [+]process-running ok Feb 24 09:58:52 crc kubenswrapper[4755]: healthz check failed Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.219094 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jb8zb" podUID="b75345bf-b93f-471f-9b11-e5c5695e7e6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.228369 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e686c093-6253-42ec-a1a6-7a8553b42533-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e686c093-6253-42ec-a1a6-7a8553b42533\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.228482 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e686c093-6253-42ec-a1a6-7a8553b42533-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e686c093-6253-42ec-a1a6-7a8553b42533\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.228549 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e686c093-6253-42ec-a1a6-7a8553b42533-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e686c093-6253-42ec-a1a6-7a8553b42533\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.250769 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e686c093-6253-42ec-a1a6-7a8553b42533-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e686c093-6253-42ec-a1a6-7a8553b42533\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.279943 4755 generic.go:334] "Generic (PLEG): container finished" podID="d7c94c8e-698e-4c14-ae46-16f03e666f27" containerID="b69a403d03bb0ab7f387d7d7eb3663a58165f0edf243ff9ffe4a118789e4b19e" exitCode=0 Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.280054 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tztw" event={"ID":"d7c94c8e-698e-4c14-ae46-16f03e666f27","Type":"ContainerDied","Data":"b69a403d03bb0ab7f387d7d7eb3663a58165f0edf243ff9ffe4a118789e4b19e"} Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.283894 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" event={"ID":"d807c0b7-7410-475a-bcde-00d34547ae06","Type":"ContainerStarted","Data":"c0bb29480979bafb98473cab708b10c3a5973df95d1d22eb8a6ab5d8c7db4938"} Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.284995 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" Feb 24 09:58:52 crc kubenswrapper[4755]: 
I0224 09:58:52.289462 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-sr449" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.290330 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532105-sr449" event={"ID":"279eff18-be0a-4f07-81fc-69fef8faac6c","Type":"ContainerDied","Data":"04b28e9ecb34f28164c35b74bdb697cb4014342fa246003c14180cd1b3af31bb"} Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.290381 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04b28e9ecb34f28164c35b74bdb697cb4014342fa246003c14180cd1b3af31bb" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.291564 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.299543 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.316676 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" podStartSLOduration=3.316659839 podStartE2EDuration="3.316659839s" podCreationTimestamp="2026-02-24 09:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:58:52.313049686 +0000 UTC m=+236.769572229" watchObservedRunningTime="2026-02-24 09:58:52.316659839 +0000 UTC m=+236.773182382" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.329752 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1-utilities\") pod \"redhat-marketplace-6gt9z\" (UID: \"2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1\") " pod="openshift-marketplace/redhat-marketplace-6gt9z" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.329786 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1-catalog-content\") pod \"redhat-marketplace-6gt9z\" (UID: \"2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1\") " pod="openshift-marketplace/redhat-marketplace-6gt9z" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.329837 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzpcl\" (UniqueName: \"kubernetes.io/projected/2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1-kube-api-access-qzpcl\") pod \"redhat-marketplace-6gt9z\" (UID: \"2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1\") " pod="openshift-marketplace/redhat-marketplace-6gt9z" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.344143 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6fd9421-d674-405c-a6c8-f25ff3c2f9f7" path="/var/lib/kubelet/pods/a6fd9421-d674-405c-a6c8-f25ff3c2f9f7/volumes" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.418312 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.435497 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1-utilities\") pod \"redhat-marketplace-6gt9z\" (UID: \"2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1\") " pod="openshift-marketplace/redhat-marketplace-6gt9z" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.435550 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1-catalog-content\") pod \"redhat-marketplace-6gt9z\" (UID: \"2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1\") " pod="openshift-marketplace/redhat-marketplace-6gt9z" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.435637 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzpcl\" (UniqueName: \"kubernetes.io/projected/2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1-kube-api-access-qzpcl\") pod \"redhat-marketplace-6gt9z\" (UID: \"2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1\") " pod="openshift-marketplace/redhat-marketplace-6gt9z" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.436881 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1-utilities\") pod \"redhat-marketplace-6gt9z\" (UID: \"2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1\") " pod="openshift-marketplace/redhat-marketplace-6gt9z" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.437312 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1-catalog-content\") pod \"redhat-marketplace-6gt9z\" (UID: \"2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1\") " 
pod="openshift-marketplace/redhat-marketplace-6gt9z" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.470907 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzpcl\" (UniqueName: \"kubernetes.io/projected/2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1-kube-api-access-qzpcl\") pod \"redhat-marketplace-6gt9z\" (UID: \"2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1\") " pod="openshift-marketplace/redhat-marketplace-6gt9z" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.477054 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gt9z" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.564499 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pjxsm"] Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.569783 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pjxsm" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.575793 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pjxsm"] Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.640141 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5cfb137-48b9-4d3d-ab93-7f688d450d2a-catalog-content\") pod \"redhat-marketplace-pjxsm\" (UID: \"f5cfb137-48b9-4d3d-ab93-7f688d450d2a\") " pod="openshift-marketplace/redhat-marketplace-pjxsm" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.640225 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5cfb137-48b9-4d3d-ab93-7f688d450d2a-utilities\") pod \"redhat-marketplace-pjxsm\" (UID: \"f5cfb137-48b9-4d3d-ab93-7f688d450d2a\") " pod="openshift-marketplace/redhat-marketplace-pjxsm" Feb 24 
09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.640326 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6kkn\" (UniqueName: \"kubernetes.io/projected/f5cfb137-48b9-4d3d-ab93-7f688d450d2a-kube-api-access-t6kkn\") pod \"redhat-marketplace-pjxsm\" (UID: \"f5cfb137-48b9-4d3d-ab93-7f688d450d2a\") " pod="openshift-marketplace/redhat-marketplace-pjxsm" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.741432 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5cfb137-48b9-4d3d-ab93-7f688d450d2a-catalog-content\") pod \"redhat-marketplace-pjxsm\" (UID: \"f5cfb137-48b9-4d3d-ab93-7f688d450d2a\") " pod="openshift-marketplace/redhat-marketplace-pjxsm" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.741771 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5cfb137-48b9-4d3d-ab93-7f688d450d2a-utilities\") pod \"redhat-marketplace-pjxsm\" (UID: \"f5cfb137-48b9-4d3d-ab93-7f688d450d2a\") " pod="openshift-marketplace/redhat-marketplace-pjxsm" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.741835 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6kkn\" (UniqueName: \"kubernetes.io/projected/f5cfb137-48b9-4d3d-ab93-7f688d450d2a-kube-api-access-t6kkn\") pod \"redhat-marketplace-pjxsm\" (UID: \"f5cfb137-48b9-4d3d-ab93-7f688d450d2a\") " pod="openshift-marketplace/redhat-marketplace-pjxsm" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.742174 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5cfb137-48b9-4d3d-ab93-7f688d450d2a-catalog-content\") pod \"redhat-marketplace-pjxsm\" (UID: \"f5cfb137-48b9-4d3d-ab93-7f688d450d2a\") " pod="openshift-marketplace/redhat-marketplace-pjxsm" Feb 24 
09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.742401 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5cfb137-48b9-4d3d-ab93-7f688d450d2a-utilities\") pod \"redhat-marketplace-pjxsm\" (UID: \"f5cfb137-48b9-4d3d-ab93-7f688d450d2a\") " pod="openshift-marketplace/redhat-marketplace-pjxsm" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.764476 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.764645 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.768817 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6kkn\" (UniqueName: \"kubernetes.io/projected/f5cfb137-48b9-4d3d-ab93-7f688d450d2a-kube-api-access-t6kkn\") pod \"redhat-marketplace-pjxsm\" (UID: \"f5cfb137-48b9-4d3d-ab93-7f688d450d2a\") " pod="openshift-marketplace/redhat-marketplace-pjxsm" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.783341 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.941086 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gt9z"] Feb 24 09:58:52 crc kubenswrapper[4755]: W0224 09:58:52.956546 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a07bcaf_0f1c_478d_9ba5_cdb4a1e912d1.slice/crio-b427d75a5ecc2958d521274fd5adeda692ac9c95afba17ac7482f5fe720ec6c7 WatchSource:0}: Error finding container b427d75a5ecc2958d521274fd5adeda692ac9c95afba17ac7482f5fe720ec6c7: Status 404 returned error can't find the container with id 
b427d75a5ecc2958d521274fd5adeda692ac9c95afba17ac7482f5fe720ec6c7 Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.978437 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pjxsm" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.993475 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.994280 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.998398 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 24 09:58:52 crc kubenswrapper[4755]: I0224 09:58:52.998658 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.000116 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.016800 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 24 09:58:53 crc kubenswrapper[4755]: W0224 09:58:53.039293 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode686c093_6253_42ec_a1a6_7a8553b42533.slice/crio-bb35edf08011f22ff1e1ee2252a5bf535f91a3efbf3fe422536cf34cc3457605 WatchSource:0}: Error finding container bb35edf08011f22ff1e1ee2252a5bf535f91a3efbf3fe422536cf34cc3457605: Status 404 returned error can't find the container with id bb35edf08011f22ff1e1ee2252a5bf535f91a3efbf3fe422536cf34cc3457605 Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.045783 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0987648c-428f-426a-9b0f-90087a76e6ed-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0987648c-428f-426a-9b0f-90087a76e6ed\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.045879 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0987648c-428f-426a-9b0f-90087a76e6ed-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0987648c-428f-426a-9b0f-90087a76e6ed\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.147118 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0987648c-428f-426a-9b0f-90087a76e6ed-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0987648c-428f-426a-9b0f-90087a76e6ed\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.147467 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0987648c-428f-426a-9b0f-90087a76e6ed-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0987648c-428f-426a-9b0f-90087a76e6ed\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.147552 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0987648c-428f-426a-9b0f-90087a76e6ed-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0987648c-428f-426a-9b0f-90087a76e6ed\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.154794 4755 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2gqk6"] Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.156900 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2gqk6" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.159344 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.161318 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2gqk6"] Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.166866 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0987648c-428f-426a-9b0f-90087a76e6ed-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0987648c-428f-426a-9b0f-90087a76e6ed\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.178079 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-fqscc" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.178453 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-fqscc" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.184150 4755 patch_prober.go:28] interesting pod/console-f9d7485db-fqscc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.184308 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fqscc" podUID="c15ecede-c840-4fc8-bc38-a970796c9517" containerName="console" probeResult="failure" 
output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.215360 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-jb8zb" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.218692 4755 patch_prober.go:28] interesting pod/router-default-5444994796-jb8zb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:58:53 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Feb 24 09:58:53 crc kubenswrapper[4755]: [+]process-running ok Feb 24 09:58:53 crc kubenswrapper[4755]: healthz check failed Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.218760 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jb8zb" podUID="b75345bf-b93f-471f-9b11-e5c5695e7e6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.249255 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lktzd\" (UniqueName: \"kubernetes.io/projected/9fce8bb3-2fc9-496a-b0c0-873427d27571-kube-api-access-lktzd\") pod \"redhat-operators-2gqk6\" (UID: \"9fce8bb3-2fc9-496a-b0c0-873427d27571\") " pod="openshift-marketplace/redhat-operators-2gqk6" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.249307 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fce8bb3-2fc9-496a-b0c0-873427d27571-utilities\") pod \"redhat-operators-2gqk6\" (UID: \"9fce8bb3-2fc9-496a-b0c0-873427d27571\") " pod="openshift-marketplace/redhat-operators-2gqk6" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 
09:58:53.249372 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fce8bb3-2fc9-496a-b0c0-873427d27571-catalog-content\") pod \"redhat-operators-2gqk6\" (UID: \"9fce8bb3-2fc9-496a-b0c0-873427d27571\") " pod="openshift-marketplace/redhat-operators-2gqk6" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.309091 4755 generic.go:334] "Generic (PLEG): container finished" podID="2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1" containerID="289b398aff338fd93042c728c523826c1668f95bbf76b80b3a8bf255a18738ba" exitCode=0 Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.309172 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gt9z" event={"ID":"2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1","Type":"ContainerDied","Data":"289b398aff338fd93042c728c523826c1668f95bbf76b80b3a8bf255a18738ba"} Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.309212 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gt9z" event={"ID":"2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1","Type":"ContainerStarted","Data":"b427d75a5ecc2958d521274fd5adeda692ac9c95afba17ac7482f5fe720ec6c7"} Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.318604 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e686c093-6253-42ec-a1a6-7a8553b42533","Type":"ContainerStarted","Data":"bb35edf08011f22ff1e1ee2252a5bf535f91a3efbf3fe422536cf34cc3457605"} Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.323414 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-xpvsw" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.351463 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lktzd\" (UniqueName: 
\"kubernetes.io/projected/9fce8bb3-2fc9-496a-b0c0-873427d27571-kube-api-access-lktzd\") pod \"redhat-operators-2gqk6\" (UID: \"9fce8bb3-2fc9-496a-b0c0-873427d27571\") " pod="openshift-marketplace/redhat-operators-2gqk6" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.351514 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fce8bb3-2fc9-496a-b0c0-873427d27571-utilities\") pod \"redhat-operators-2gqk6\" (UID: \"9fce8bb3-2fc9-496a-b0c0-873427d27571\") " pod="openshift-marketplace/redhat-operators-2gqk6" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.351556 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fce8bb3-2fc9-496a-b0c0-873427d27571-catalog-content\") pod \"redhat-operators-2gqk6\" (UID: \"9fce8bb3-2fc9-496a-b0c0-873427d27571\") " pod="openshift-marketplace/redhat-operators-2gqk6" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.356735 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fce8bb3-2fc9-496a-b0c0-873427d27571-catalog-content\") pod \"redhat-operators-2gqk6\" (UID: \"9fce8bb3-2fc9-496a-b0c0-873427d27571\") " pod="openshift-marketplace/redhat-operators-2gqk6" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.364493 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.364907 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fce8bb3-2fc9-496a-b0c0-873427d27571-utilities\") pod \"redhat-operators-2gqk6\" (UID: \"9fce8bb3-2fc9-496a-b0c0-873427d27571\") " pod="openshift-marketplace/redhat-operators-2gqk6" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.387923 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lktzd\" (UniqueName: \"kubernetes.io/projected/9fce8bb3-2fc9-496a-b0c0-873427d27571-kube-api-access-lktzd\") pod \"redhat-operators-2gqk6\" (UID: \"9fce8bb3-2fc9-496a-b0c0-873427d27571\") " pod="openshift-marketplace/redhat-operators-2gqk6" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.450354 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pjxsm"] Feb 24 09:58:53 crc kubenswrapper[4755]: W0224 09:58:53.470273 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cfb137_48b9_4d3d_ab93_7f688d450d2a.slice/crio-063280ac5adc4dedfb7b645bf15e6bb12fc387bcc33392be5dc53332f3353b13 WatchSource:0}: Error finding container 063280ac5adc4dedfb7b645bf15e6bb12fc387bcc33392be5dc53332f3353b13: Status 404 returned error can't find the container with id 063280ac5adc4dedfb7b645bf15e6bb12fc387bcc33392be5dc53332f3353b13 Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.482219 4755 patch_prober.go:28] interesting pod/downloads-7954f5f757-vhlbl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.482271 4755 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-console/downloads-7954f5f757-vhlbl" podUID="d1c02e5f-aebc-4edb-9e99-f26fc84a32ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.482276 4755 patch_prober.go:28] interesting pod/downloads-7954f5f757-vhlbl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.482332 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vhlbl" podUID="d1c02e5f-aebc-4edb-9e99-f26fc84a32ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.490528 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2gqk6" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.550826 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7l52t"] Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.551791 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7l52t" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.558450 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7l52t"] Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.656199 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g269s\" (UniqueName: \"kubernetes.io/projected/f634ec6f-7ee6-4789-bc6a-e860f86fe6fa-kube-api-access-g269s\") pod \"redhat-operators-7l52t\" (UID: \"f634ec6f-7ee6-4789-bc6a-e860f86fe6fa\") " pod="openshift-marketplace/redhat-operators-7l52t" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.656302 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f634ec6f-7ee6-4789-bc6a-e860f86fe6fa-utilities\") pod \"redhat-operators-7l52t\" (UID: \"f634ec6f-7ee6-4789-bc6a-e860f86fe6fa\") " pod="openshift-marketplace/redhat-operators-7l52t" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.656336 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f634ec6f-7ee6-4789-bc6a-e860f86fe6fa-catalog-content\") pod \"redhat-operators-7l52t\" (UID: \"f634ec6f-7ee6-4789-bc6a-e860f86fe6fa\") " pod="openshift-marketplace/redhat-operators-7l52t" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.735518 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.757341 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f634ec6f-7ee6-4789-bc6a-e860f86fe6fa-utilities\") pod \"redhat-operators-7l52t\" (UID: 
\"f634ec6f-7ee6-4789-bc6a-e860f86fe6fa\") " pod="openshift-marketplace/redhat-operators-7l52t" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.757681 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f634ec6f-7ee6-4789-bc6a-e860f86fe6fa-catalog-content\") pod \"redhat-operators-7l52t\" (UID: \"f634ec6f-7ee6-4789-bc6a-e860f86fe6fa\") " pod="openshift-marketplace/redhat-operators-7l52t" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.757761 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g269s\" (UniqueName: \"kubernetes.io/projected/f634ec6f-7ee6-4789-bc6a-e860f86fe6fa-kube-api-access-g269s\") pod \"redhat-operators-7l52t\" (UID: \"f634ec6f-7ee6-4789-bc6a-e860f86fe6fa\") " pod="openshift-marketplace/redhat-operators-7l52t" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.758866 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f634ec6f-7ee6-4789-bc6a-e860f86fe6fa-utilities\") pod \"redhat-operators-7l52t\" (UID: \"f634ec6f-7ee6-4789-bc6a-e860f86fe6fa\") " pod="openshift-marketplace/redhat-operators-7l52t" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.759127 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f634ec6f-7ee6-4789-bc6a-e860f86fe6fa-catalog-content\") pod \"redhat-operators-7l52t\" (UID: \"f634ec6f-7ee6-4789-bc6a-e860f86fe6fa\") " pod="openshift-marketplace/redhat-operators-7l52t" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.786247 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g269s\" (UniqueName: \"kubernetes.io/projected/f634ec6f-7ee6-4789-bc6a-e860f86fe6fa-kube-api-access-g269s\") pod \"redhat-operators-7l52t\" (UID: \"f634ec6f-7ee6-4789-bc6a-e860f86fe6fa\") " 
pod="openshift-marketplace/redhat-operators-7l52t" Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.800768 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2gqk6"] Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.832642 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 24 09:58:53 crc kubenswrapper[4755]: I0224 09:58:53.875899 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7l52t" Feb 24 09:58:53 crc kubenswrapper[4755]: W0224 09:58:53.891813 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0987648c_428f_426a_9b0f_90087a76e6ed.slice/crio-1162d6bafd07d3a9c9f7c34c9ea1e9871c71f9df51321efc3742aa87e2761c37 WatchSource:0}: Error finding container 1162d6bafd07d3a9c9f7c34c9ea1e9871c71f9df51321efc3742aa87e2761c37: Status 404 returned error can't find the container with id 1162d6bafd07d3a9c9f7c34c9ea1e9871c71f9df51321efc3742aa87e2761c37 Feb 24 09:58:54 crc kubenswrapper[4755]: I0224 09:58:54.217779 4755 patch_prober.go:28] interesting pod/router-default-5444994796-jb8zb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:58:54 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Feb 24 09:58:54 crc kubenswrapper[4755]: [+]process-running ok Feb 24 09:58:54 crc kubenswrapper[4755]: healthz check failed Feb 24 09:58:54 crc kubenswrapper[4755]: I0224 09:58:54.218024 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jb8zb" podUID="b75345bf-b93f-471f-9b11-e5c5695e7e6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:58:54 crc kubenswrapper[4755]: I0224 09:58:54.333503 4755 
generic.go:334] "Generic (PLEG): container finished" podID="e686c093-6253-42ec-a1a6-7a8553b42533" containerID="635a5c1cda8d7b18267f9d630c229f0a1951414e062df8ddc96c7a4aeaf86c8e" exitCode=0 Feb 24 09:58:54 crc kubenswrapper[4755]: I0224 09:58:54.333553 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e686c093-6253-42ec-a1a6-7a8553b42533","Type":"ContainerDied","Data":"635a5c1cda8d7b18267f9d630c229f0a1951414e062df8ddc96c7a4aeaf86c8e"} Feb 24 09:58:54 crc kubenswrapper[4755]: I0224 09:58:54.341129 4755 generic.go:334] "Generic (PLEG): container finished" podID="9fce8bb3-2fc9-496a-b0c0-873427d27571" containerID="94c500fb7a230622d0f4540b263fd4dbed53d89fb25e6adec538d1eb7438ebd3" exitCode=0 Feb 24 09:58:54 crc kubenswrapper[4755]: I0224 09:58:54.341186 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gqk6" event={"ID":"9fce8bb3-2fc9-496a-b0c0-873427d27571","Type":"ContainerDied","Data":"94c500fb7a230622d0f4540b263fd4dbed53d89fb25e6adec538d1eb7438ebd3"} Feb 24 09:58:54 crc kubenswrapper[4755]: I0224 09:58:54.341209 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gqk6" event={"ID":"9fce8bb3-2fc9-496a-b0c0-873427d27571","Type":"ContainerStarted","Data":"a93faa99bb62bfadd21e1cc0b95a2a647557eaec759206d6cb3279486396b0d8"} Feb 24 09:58:54 crc kubenswrapper[4755]: I0224 09:58:54.346774 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0987648c-428f-426a-9b0f-90087a76e6ed","Type":"ContainerStarted","Data":"1162d6bafd07d3a9c9f7c34c9ea1e9871c71f9df51321efc3742aa87e2761c37"} Feb 24 09:58:54 crc kubenswrapper[4755]: I0224 09:58:54.403643 4755 generic.go:334] "Generic (PLEG): container finished" podID="f5cfb137-48b9-4d3d-ab93-7f688d450d2a" containerID="c3d2011ef31ca9d7ec4c0331a97664096a2ec12cb35a4ba415254ab5ff16b134" exitCode=0 Feb 24 
09:58:54 crc kubenswrapper[4755]: I0224 09:58:54.404980 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjxsm" event={"ID":"f5cfb137-48b9-4d3d-ab93-7f688d450d2a","Type":"ContainerDied","Data":"c3d2011ef31ca9d7ec4c0331a97664096a2ec12cb35a4ba415254ab5ff16b134"} Feb 24 09:58:54 crc kubenswrapper[4755]: I0224 09:58:54.405030 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjxsm" event={"ID":"f5cfb137-48b9-4d3d-ab93-7f688d450d2a","Type":"ContainerStarted","Data":"063280ac5adc4dedfb7b645bf15e6bb12fc387bcc33392be5dc53332f3353b13"} Feb 24 09:58:54 crc kubenswrapper[4755]: I0224 09:58:54.405042 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7l52t"] Feb 24 09:58:54 crc kubenswrapper[4755]: W0224 09:58:54.422656 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf634ec6f_7ee6_4789_bc6a_e860f86fe6fa.slice/crio-41610d61058d9ce15290b6403f3559002df78d9cf09b8c8cc20dadc5c85ec5d3 WatchSource:0}: Error finding container 41610d61058d9ce15290b6403f3559002df78d9cf09b8c8cc20dadc5c85ec5d3: Status 404 returned error can't find the container with id 41610d61058d9ce15290b6403f3559002df78d9cf09b8c8cc20dadc5c85ec5d3 Feb 24 09:58:55 crc kubenswrapper[4755]: I0224 09:58:55.156681 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39084: no serving certificate available for the kubelet" Feb 24 09:58:55 crc kubenswrapper[4755]: I0224 09:58:55.217240 4755 patch_prober.go:28] interesting pod/router-default-5444994796-jb8zb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:58:55 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Feb 24 09:58:55 crc kubenswrapper[4755]: [+]process-running ok Feb 24 09:58:55 crc 
kubenswrapper[4755]: healthz check failed Feb 24 09:58:55 crc kubenswrapper[4755]: I0224 09:58:55.217308 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jb8zb" podUID="b75345bf-b93f-471f-9b11-e5c5695e7e6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:58:55 crc kubenswrapper[4755]: I0224 09:58:55.427265 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0987648c-428f-426a-9b0f-90087a76e6ed","Type":"ContainerDied","Data":"1b9c400d7018d6cbaa3e213da31115a5e68308c31914eb2a19e4dc437d7f8fe5"} Feb 24 09:58:55 crc kubenswrapper[4755]: I0224 09:58:55.427243 4755 generic.go:334] "Generic (PLEG): container finished" podID="0987648c-428f-426a-9b0f-90087a76e6ed" containerID="1b9c400d7018d6cbaa3e213da31115a5e68308c31914eb2a19e4dc437d7f8fe5" exitCode=0 Feb 24 09:58:55 crc kubenswrapper[4755]: I0224 09:58:55.448676 4755 generic.go:334] "Generic (PLEG): container finished" podID="f634ec6f-7ee6-4789-bc6a-e860f86fe6fa" containerID="8ea0c820e9ba896723fa93e4127eaf06240aa536f0e5c0b8b1310c0f6a537b42" exitCode=0 Feb 24 09:58:55 crc kubenswrapper[4755]: I0224 09:58:55.448938 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7l52t" event={"ID":"f634ec6f-7ee6-4789-bc6a-e860f86fe6fa","Type":"ContainerDied","Data":"8ea0c820e9ba896723fa93e4127eaf06240aa536f0e5c0b8b1310c0f6a537b42"} Feb 24 09:58:55 crc kubenswrapper[4755]: I0224 09:58:55.448969 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7l52t" event={"ID":"f634ec6f-7ee6-4789-bc6a-e860f86fe6fa","Type":"ContainerStarted","Data":"41610d61058d9ce15290b6403f3559002df78d9cf09b8c8cc20dadc5c85ec5d3"} Feb 24 09:58:55 crc kubenswrapper[4755]: I0224 09:58:55.719015 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 09:58:55 crc kubenswrapper[4755]: I0224 09:58:55.804446 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e686c093-6253-42ec-a1a6-7a8553b42533-kube-api-access\") pod \"e686c093-6253-42ec-a1a6-7a8553b42533\" (UID: \"e686c093-6253-42ec-a1a6-7a8553b42533\") " Feb 24 09:58:55 crc kubenswrapper[4755]: I0224 09:58:55.804553 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e686c093-6253-42ec-a1a6-7a8553b42533-kubelet-dir\") pod \"e686c093-6253-42ec-a1a6-7a8553b42533\" (UID: \"e686c093-6253-42ec-a1a6-7a8553b42533\") " Feb 24 09:58:55 crc kubenswrapper[4755]: I0224 09:58:55.804591 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e686c093-6253-42ec-a1a6-7a8553b42533-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e686c093-6253-42ec-a1a6-7a8553b42533" (UID: "e686c093-6253-42ec-a1a6-7a8553b42533"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:58:55 crc kubenswrapper[4755]: I0224 09:58:55.804867 4755 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e686c093-6253-42ec-a1a6-7a8553b42533-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 24 09:58:55 crc kubenswrapper[4755]: I0224 09:58:55.822791 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e686c093-6253-42ec-a1a6-7a8553b42533-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e686c093-6253-42ec-a1a6-7a8553b42533" (UID: "e686c093-6253-42ec-a1a6-7a8553b42533"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:58:55 crc kubenswrapper[4755]: I0224 09:58:55.911211 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e686c093-6253-42ec-a1a6-7a8553b42533-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 09:58:56 crc kubenswrapper[4755]: I0224 09:58:56.076103 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39096: no serving certificate available for the kubelet" Feb 24 09:58:56 crc kubenswrapper[4755]: I0224 09:58:56.242051 4755 patch_prober.go:28] interesting pod/router-default-5444994796-jb8zb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:58:56 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Feb 24 09:58:56 crc kubenswrapper[4755]: [+]process-running ok Feb 24 09:58:56 crc kubenswrapper[4755]: healthz check failed Feb 24 09:58:56 crc kubenswrapper[4755]: I0224 09:58:56.242120 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jb8zb" podUID="b75345bf-b93f-471f-9b11-e5c5695e7e6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:58:56 crc kubenswrapper[4755]: I0224 09:58:56.464530 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e686c093-6253-42ec-a1a6-7a8553b42533","Type":"ContainerDied","Data":"bb35edf08011f22ff1e1ee2252a5bf535f91a3efbf3fe422536cf34cc3457605"} Feb 24 09:58:56 crc kubenswrapper[4755]: I0224 09:58:56.464870 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb35edf08011f22ff1e1ee2252a5bf535f91a3efbf3fe422536cf34cc3457605" Feb 24 09:58:56 crc kubenswrapper[4755]: I0224 09:58:56.464584 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 09:58:57 crc kubenswrapper[4755]: I0224 09:58:57.218125 4755 patch_prober.go:28] interesting pod/router-default-5444994796-jb8zb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:58:57 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Feb 24 09:58:57 crc kubenswrapper[4755]: [+]process-running ok Feb 24 09:58:57 crc kubenswrapper[4755]: healthz check failed Feb 24 09:58:57 crc kubenswrapper[4755]: I0224 09:58:57.218410 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jb8zb" podUID="b75345bf-b93f-471f-9b11-e5c5695e7e6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:58:58 crc kubenswrapper[4755]: I0224 09:58:58.219313 4755 patch_prober.go:28] interesting pod/router-default-5444994796-jb8zb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:58:58 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Feb 24 09:58:58 crc kubenswrapper[4755]: [+]process-running ok Feb 24 09:58:58 crc kubenswrapper[4755]: healthz check failed Feb 24 09:58:58 crc kubenswrapper[4755]: I0224 09:58:58.219398 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jb8zb" podUID="b75345bf-b93f-471f-9b11-e5c5695e7e6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:58:58 crc kubenswrapper[4755]: I0224 09:58:58.759626 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-lj8d5" Feb 24 09:58:59 crc kubenswrapper[4755]: I0224 09:58:59.217366 4755 
patch_prober.go:28] interesting pod/router-default-5444994796-jb8zb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:58:59 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Feb 24 09:58:59 crc kubenswrapper[4755]: [+]process-running ok Feb 24 09:58:59 crc kubenswrapper[4755]: healthz check failed Feb 24 09:58:59 crc kubenswrapper[4755]: I0224 09:58:59.217452 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jb8zb" podUID="b75345bf-b93f-471f-9b11-e5c5695e7e6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:59:00 crc kubenswrapper[4755]: I0224 09:59:00.216371 4755 patch_prober.go:28] interesting pod/router-default-5444994796-jb8zb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:59:00 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Feb 24 09:59:00 crc kubenswrapper[4755]: [+]process-running ok Feb 24 09:59:00 crc kubenswrapper[4755]: healthz check failed Feb 24 09:59:00 crc kubenswrapper[4755]: I0224 09:59:00.216428 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jb8zb" podUID="b75345bf-b93f-471f-9b11-e5c5695e7e6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:59:01 crc kubenswrapper[4755]: I0224 09:59:01.217429 4755 patch_prober.go:28] interesting pod/router-default-5444994796-jb8zb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:59:01 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Feb 24 09:59:01 
crc kubenswrapper[4755]: [+]process-running ok Feb 24 09:59:01 crc kubenswrapper[4755]: healthz check failed Feb 24 09:59:01 crc kubenswrapper[4755]: I0224 09:59:01.217474 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jb8zb" podUID="b75345bf-b93f-471f-9b11-e5c5695e7e6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:59:02 crc kubenswrapper[4755]: I0224 09:59:02.213728 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:59:02 crc kubenswrapper[4755]: I0224 09:59:02.216618 4755 patch_prober.go:28] interesting pod/router-default-5444994796-jb8zb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:59:02 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Feb 24 09:59:02 crc kubenswrapper[4755]: [+]process-running ok Feb 24 09:59:02 crc kubenswrapper[4755]: healthz check failed Feb 24 09:59:02 crc kubenswrapper[4755]: I0224 09:59:02.216653 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jb8zb" podUID="b75345bf-b93f-471f-9b11-e5c5695e7e6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:59:02 crc kubenswrapper[4755]: I0224 09:59:02.327144 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0987648c-428f-426a-9b0f-90087a76e6ed-kube-api-access\") pod \"0987648c-428f-426a-9b0f-90087a76e6ed\" (UID: \"0987648c-428f-426a-9b0f-90087a76e6ed\") " Feb 24 09:59:02 crc kubenswrapper[4755]: I0224 09:59:02.327236 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/0987648c-428f-426a-9b0f-90087a76e6ed-kubelet-dir\") pod \"0987648c-428f-426a-9b0f-90087a76e6ed\" (UID: \"0987648c-428f-426a-9b0f-90087a76e6ed\") " Feb 24 09:59:02 crc kubenswrapper[4755]: I0224 09:59:02.327573 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0987648c-428f-426a-9b0f-90087a76e6ed-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0987648c-428f-426a-9b0f-90087a76e6ed" (UID: "0987648c-428f-426a-9b0f-90087a76e6ed"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:59:02 crc kubenswrapper[4755]: I0224 09:59:02.332680 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0987648c-428f-426a-9b0f-90087a76e6ed-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0987648c-428f-426a-9b0f-90087a76e6ed" (UID: "0987648c-428f-426a-9b0f-90087a76e6ed"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:59:02 crc kubenswrapper[4755]: I0224 09:59:02.430000 4755 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0987648c-428f-426a-9b0f-90087a76e6ed-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:02 crc kubenswrapper[4755]: I0224 09:59:02.430080 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0987648c-428f-426a-9b0f-90087a76e6ed-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:02 crc kubenswrapper[4755]: I0224 09:59:02.527967 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0987648c-428f-426a-9b0f-90087a76e6ed","Type":"ContainerDied","Data":"1162d6bafd07d3a9c9f7c34c9ea1e9871c71f9df51321efc3742aa87e2761c37"} Feb 24 09:59:02 crc kubenswrapper[4755]: I0224 09:59:02.528404 4755 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1162d6bafd07d3a9c9f7c34c9ea1e9871c71f9df51321efc3742aa87e2761c37" Feb 24 09:59:02 crc kubenswrapper[4755]: I0224 09:59:02.528377 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:59:03 crc kubenswrapper[4755]: I0224 09:59:03.174514 4755 patch_prober.go:28] interesting pod/console-f9d7485db-fqscc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 24 09:59:03 crc kubenswrapper[4755]: I0224 09:59:03.174572 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fqscc" podUID="c15ecede-c840-4fc8-bc38-a970796c9517" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 24 09:59:03 crc kubenswrapper[4755]: I0224 09:59:03.216729 4755 patch_prober.go:28] interesting pod/router-default-5444994796-jb8zb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:59:03 crc kubenswrapper[4755]: [+]has-synced ok Feb 24 09:59:03 crc kubenswrapper[4755]: [+]process-running ok Feb 24 09:59:03 crc kubenswrapper[4755]: healthz check failed Feb 24 09:59:03 crc kubenswrapper[4755]: I0224 09:59:03.216816 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jb8zb" podUID="b75345bf-b93f-471f-9b11-e5c5695e7e6a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:59:03 crc kubenswrapper[4755]: I0224 09:59:03.492621 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/downloads-7954f5f757-vhlbl" Feb 24 09:59:04 crc kubenswrapper[4755]: I0224 09:59:04.217657 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-jb8zb" Feb 24 09:59:04 crc kubenswrapper[4755]: I0224 09:59:04.219949 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-jb8zb" Feb 24 09:59:05 crc kubenswrapper[4755]: I0224 09:59:05.421489 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37836: no serving certificate available for the kubelet" Feb 24 09:59:08 crc kubenswrapper[4755]: I0224 09:59:08.820710 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j"] Feb 24 09:59:08 crc kubenswrapper[4755]: I0224 09:59:08.821118 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" podUID="24f886fe-4911-48f1-8aaa-46e0ddabb144" containerName="controller-manager" containerID="cri-o://fa2dc332da46b7c32a5c204abbadba682ffb9c91322d579e74633c01de865508" gracePeriod=30 Feb 24 09:59:08 crc kubenswrapper[4755]: I0224 09:59:08.840162 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz"] Feb 24 09:59:08 crc kubenswrapper[4755]: I0224 09:59:08.840348 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" podUID="d807c0b7-7410-475a-bcde-00d34547ae06" containerName="route-controller-manager" containerID="cri-o://c0bb29480979bafb98473cab708b10c3a5973df95d1d22eb8a6ab5d8c7db4938" gracePeriod=30 Feb 24 09:59:09 crc kubenswrapper[4755]: I0224 09:59:09.563814 4755 generic.go:334] "Generic (PLEG): container finished" podID="d807c0b7-7410-475a-bcde-00d34547ae06" 
containerID="c0bb29480979bafb98473cab708b10c3a5973df95d1d22eb8a6ab5d8c7db4938" exitCode=0 Feb 24 09:59:09 crc kubenswrapper[4755]: I0224 09:59:09.563901 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" event={"ID":"d807c0b7-7410-475a-bcde-00d34547ae06","Type":"ContainerDied","Data":"c0bb29480979bafb98473cab708b10c3a5973df95d1d22eb8a6ab5d8c7db4938"} Feb 24 09:59:09 crc kubenswrapper[4755]: I0224 09:59:09.565189 4755 generic.go:334] "Generic (PLEG): container finished" podID="24f886fe-4911-48f1-8aaa-46e0ddabb144" containerID="fa2dc332da46b7c32a5c204abbadba682ffb9c91322d579e74633c01de865508" exitCode=0 Feb 24 09:59:09 crc kubenswrapper[4755]: I0224 09:59:09.565218 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" event={"ID":"24f886fe-4911-48f1-8aaa-46e0ddabb144","Type":"ContainerDied","Data":"fa2dc332da46b7c32a5c204abbadba682ffb9c91322d579e74633c01de865508"} Feb 24 09:59:09 crc kubenswrapper[4755]: I0224 09:59:09.805084 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 09:59:10 crc kubenswrapper[4755]: I0224 09:59:10.776021 4755 patch_prober.go:28] interesting pod/controller-manager-6f6cbdcc5d-nsm7j container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.47:8443/healthz\": dial tcp 10.217.0.47:8443: connect: connection refused" start-of-body= Feb 24 09:59:10 crc kubenswrapper[4755]: I0224 09:59:10.776129 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" podUID="24f886fe-4911-48f1-8aaa-46e0ddabb144" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.47:8443/healthz\": dial tcp 10.217.0.47:8443: connect: connection refused" Feb 
24 09:59:10 crc kubenswrapper[4755]: I0224 09:59:10.784990 4755 patch_prober.go:28] interesting pod/route-controller-manager-76f8646bc4-spfsz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.48:8443/healthz\": dial tcp 10.217.0.48:8443: connect: connection refused" start-of-body= Feb 24 09:59:10 crc kubenswrapper[4755]: I0224 09:59:10.785095 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" podUID="d807c0b7-7410-475a-bcde-00d34547ae06" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.48:8443/healthz\": dial tcp 10.217.0.48:8443: connect: connection refused" Feb 24 09:59:12 crc kubenswrapper[4755]: E0224 09:59:12.904158 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 24 09:59:12 crc kubenswrapper[4755]: E0224 09:59:12.905079 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5lkvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mqjwv_openshift-marketplace(f4344c16-3181-42d3-9d94-6cccd3fe8cc0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 09:59:12 crc kubenswrapper[4755]: E0224 09:59:12.906504 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-mqjwv" podUID="f4344c16-3181-42d3-9d94-6cccd3fe8cc0" Feb 24 09:59:13 crc 
kubenswrapper[4755]: I0224 09:59:13.177569 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-fqscc" Feb 24 09:59:13 crc kubenswrapper[4755]: I0224 09:59:13.180371 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-fqscc" Feb 24 09:59:16 crc kubenswrapper[4755]: E0224 09:59:16.117278 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-mqjwv" podUID="f4344c16-3181-42d3-9d94-6cccd3fe8cc0" Feb 24 09:59:16 crc kubenswrapper[4755]: E0224 09:59:16.189006 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 24 09:59:16 crc kubenswrapper[4755]: E0224 09:59:16.189248 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lktzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-2gqk6_openshift-marketplace(9fce8bb3-2fc9-496a-b0c0-873427d27571): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 09:59:16 crc kubenswrapper[4755]: E0224 09:59:16.190510 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-2gqk6" podUID="9fce8bb3-2fc9-496a-b0c0-873427d27571" Feb 24 09:59:17 crc 
kubenswrapper[4755]: E0224 09:59:17.439329 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-2gqk6" podUID="9fce8bb3-2fc9-496a-b0c0-873427d27571" Feb 24 09:59:17 crc kubenswrapper[4755]: E0224 09:59:17.527385 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 24 09:59:17 crc kubenswrapper[4755]: E0224 09:59:17.527616 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t6kkn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-pjxsm_openshift-marketplace(f5cfb137-48b9-4d3d-ab93-7f688d450d2a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 09:59:17 crc kubenswrapper[4755]: E0224 09:59:17.529527 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-pjxsm" podUID="f5cfb137-48b9-4d3d-ab93-7f688d450d2a" Feb 24 09:59:17 crc 
kubenswrapper[4755]: E0224 09:59:17.615874 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-pjxsm" podUID="f5cfb137-48b9-4d3d-ab93-7f688d450d2a" Feb 24 09:59:17 crc kubenswrapper[4755]: E0224 09:59:17.623030 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 24 09:59:17 crc kubenswrapper[4755]: E0224 09:59:17.623191 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qzpcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-6gt9z_openshift-marketplace(2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 09:59:17 crc kubenswrapper[4755]: E0224 09:59:17.624558 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-6gt9z" podUID="2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1" Feb 24 09:59:17 crc 
kubenswrapper[4755]: I0224 09:59:17.724277 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.748778 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.751393 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-667d4d966d-qf5k8"] Feb 24 09:59:17 crc kubenswrapper[4755]: E0224 09:59:17.751626 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d807c0b7-7410-475a-bcde-00d34547ae06" containerName="route-controller-manager" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.751641 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d807c0b7-7410-475a-bcde-00d34547ae06" containerName="route-controller-manager" Feb 24 09:59:17 crc kubenswrapper[4755]: E0224 09:59:17.751658 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0987648c-428f-426a-9b0f-90087a76e6ed" containerName="pruner" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.751667 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="0987648c-428f-426a-9b0f-90087a76e6ed" containerName="pruner" Feb 24 09:59:17 crc kubenswrapper[4755]: E0224 09:59:17.751682 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24f886fe-4911-48f1-8aaa-46e0ddabb144" containerName="controller-manager" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.751691 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f886fe-4911-48f1-8aaa-46e0ddabb144" containerName="controller-manager" Feb 24 09:59:17 crc kubenswrapper[4755]: E0224 09:59:17.751704 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e686c093-6253-42ec-a1a6-7a8553b42533" 
containerName="pruner" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.751712 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="e686c093-6253-42ec-a1a6-7a8553b42533" containerName="pruner" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.751839 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="24f886fe-4911-48f1-8aaa-46e0ddabb144" containerName="controller-manager" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.751853 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="0987648c-428f-426a-9b0f-90087a76e6ed" containerName="pruner" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.751865 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="d807c0b7-7410-475a-bcde-00d34547ae06" containerName="route-controller-manager" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.751877 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="e686c093-6253-42ec-a1a6-7a8553b42533" containerName="pruner" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.752301 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.768298 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-667d4d966d-qf5k8"] Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.891107 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24f886fe-4911-48f1-8aaa-46e0ddabb144-serving-cert\") pod \"24f886fe-4911-48f1-8aaa-46e0ddabb144\" (UID: \"24f886fe-4911-48f1-8aaa-46e0ddabb144\") " Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.891596 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24f886fe-4911-48f1-8aaa-46e0ddabb144-proxy-ca-bundles\") pod \"24f886fe-4911-48f1-8aaa-46e0ddabb144\" (UID: \"24f886fe-4911-48f1-8aaa-46e0ddabb144\") " Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.891695 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r96sx\" (UniqueName: \"kubernetes.io/projected/d807c0b7-7410-475a-bcde-00d34547ae06-kube-api-access-r96sx\") pod \"d807c0b7-7410-475a-bcde-00d34547ae06\" (UID: \"d807c0b7-7410-475a-bcde-00d34547ae06\") " Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.891785 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d807c0b7-7410-475a-bcde-00d34547ae06-serving-cert\") pod \"d807c0b7-7410-475a-bcde-00d34547ae06\" (UID: \"d807c0b7-7410-475a-bcde-00d34547ae06\") " Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.891864 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d807c0b7-7410-475a-bcde-00d34547ae06-config\") pod \"d807c0b7-7410-475a-bcde-00d34547ae06\" (UID: 
\"d807c0b7-7410-475a-bcde-00d34547ae06\") " Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.891935 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24f886fe-4911-48f1-8aaa-46e0ddabb144-client-ca\") pod \"24f886fe-4911-48f1-8aaa-46e0ddabb144\" (UID: \"24f886fe-4911-48f1-8aaa-46e0ddabb144\") " Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.892003 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24f886fe-4911-48f1-8aaa-46e0ddabb144-config\") pod \"24f886fe-4911-48f1-8aaa-46e0ddabb144\" (UID: \"24f886fe-4911-48f1-8aaa-46e0ddabb144\") " Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.892089 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjhh7\" (UniqueName: \"kubernetes.io/projected/24f886fe-4911-48f1-8aaa-46e0ddabb144-kube-api-access-rjhh7\") pod \"24f886fe-4911-48f1-8aaa-46e0ddabb144\" (UID: \"24f886fe-4911-48f1-8aaa-46e0ddabb144\") " Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.892190 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d807c0b7-7410-475a-bcde-00d34547ae06-client-ca\") pod \"d807c0b7-7410-475a-bcde-00d34547ae06\" (UID: \"d807c0b7-7410-475a-bcde-00d34547ae06\") " Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.892398 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e34fbb5a-81ec-442b-8b8d-f078ad03b517-serving-cert\") pod \"controller-manager-667d4d966d-qf5k8\" (UID: \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\") " pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.892552 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgv7g\" (UniqueName: \"kubernetes.io/projected/e34fbb5a-81ec-442b-8b8d-f078ad03b517-kube-api-access-jgv7g\") pod \"controller-manager-667d4d966d-qf5k8\" (UID: \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\") " pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.892645 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e34fbb5a-81ec-442b-8b8d-f078ad03b517-config\") pod \"controller-manager-667d4d966d-qf5k8\" (UID: \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\") " pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.892757 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e34fbb5a-81ec-442b-8b8d-f078ad03b517-client-ca\") pod \"controller-manager-667d4d966d-qf5k8\" (UID: \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\") " pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.892829 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e34fbb5a-81ec-442b-8b8d-f078ad03b517-proxy-ca-bundles\") pod \"controller-manager-667d4d966d-qf5k8\" (UID: \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\") " pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.892104 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24f886fe-4911-48f1-8aaa-46e0ddabb144-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "24f886fe-4911-48f1-8aaa-46e0ddabb144" (UID: 
"24f886fe-4911-48f1-8aaa-46e0ddabb144"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.892588 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d807c0b7-7410-475a-bcde-00d34547ae06-config" (OuterVolumeSpecName: "config") pod "d807c0b7-7410-475a-bcde-00d34547ae06" (UID: "d807c0b7-7410-475a-bcde-00d34547ae06"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.893297 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d807c0b7-7410-475a-bcde-00d34547ae06-client-ca" (OuterVolumeSpecName: "client-ca") pod "d807c0b7-7410-475a-bcde-00d34547ae06" (UID: "d807c0b7-7410-475a-bcde-00d34547ae06"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.893496 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24f886fe-4911-48f1-8aaa-46e0ddabb144-config" (OuterVolumeSpecName: "config") pod "24f886fe-4911-48f1-8aaa-46e0ddabb144" (UID: "24f886fe-4911-48f1-8aaa-46e0ddabb144"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.894086 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24f886fe-4911-48f1-8aaa-46e0ddabb144-client-ca" (OuterVolumeSpecName: "client-ca") pod "24f886fe-4911-48f1-8aaa-46e0ddabb144" (UID: "24f886fe-4911-48f1-8aaa-46e0ddabb144"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.895355 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24f886fe-4911-48f1-8aaa-46e0ddabb144-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "24f886fe-4911-48f1-8aaa-46e0ddabb144" (UID: "24f886fe-4911-48f1-8aaa-46e0ddabb144"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.895863 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d807c0b7-7410-475a-bcde-00d34547ae06-kube-api-access-r96sx" (OuterVolumeSpecName: "kube-api-access-r96sx") pod "d807c0b7-7410-475a-bcde-00d34547ae06" (UID: "d807c0b7-7410-475a-bcde-00d34547ae06"). InnerVolumeSpecName "kube-api-access-r96sx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.896251 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d807c0b7-7410-475a-bcde-00d34547ae06-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d807c0b7-7410-475a-bcde-00d34547ae06" (UID: "d807c0b7-7410-475a-bcde-00d34547ae06"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.896754 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24f886fe-4911-48f1-8aaa-46e0ddabb144-kube-api-access-rjhh7" (OuterVolumeSpecName: "kube-api-access-rjhh7") pod "24f886fe-4911-48f1-8aaa-46e0ddabb144" (UID: "24f886fe-4911-48f1-8aaa-46e0ddabb144"). InnerVolumeSpecName "kube-api-access-rjhh7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.993429 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e34fbb5a-81ec-442b-8b8d-f078ad03b517-client-ca\") pod \"controller-manager-667d4d966d-qf5k8\" (UID: \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\") " pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.993468 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e34fbb5a-81ec-442b-8b8d-f078ad03b517-proxy-ca-bundles\") pod \"controller-manager-667d4d966d-qf5k8\" (UID: \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\") " pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.993504 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e34fbb5a-81ec-442b-8b8d-f078ad03b517-serving-cert\") pod \"controller-manager-667d4d966d-qf5k8\" (UID: \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\") " pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.993606 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgv7g\" (UniqueName: \"kubernetes.io/projected/e34fbb5a-81ec-442b-8b8d-f078ad03b517-kube-api-access-jgv7g\") pod \"controller-manager-667d4d966d-qf5k8\" (UID: \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\") " pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.995400 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e34fbb5a-81ec-442b-8b8d-f078ad03b517-client-ca\") pod 
\"controller-manager-667d4d966d-qf5k8\" (UID: \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\") " pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.993635 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e34fbb5a-81ec-442b-8b8d-f078ad03b517-config\") pod \"controller-manager-667d4d966d-qf5k8\" (UID: \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\") " pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.995726 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24f886fe-4911-48f1-8aaa-46e0ddabb144-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.995747 4755 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24f886fe-4911-48f1-8aaa-46e0ddabb144-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.995762 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r96sx\" (UniqueName: \"kubernetes.io/projected/d807c0b7-7410-475a-bcde-00d34547ae06-kube-api-access-r96sx\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.995779 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d807c0b7-7410-475a-bcde-00d34547ae06-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.995789 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d807c0b7-7410-475a-bcde-00d34547ae06-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.995798 4755 reconciler_common.go:293] "Volume 
detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24f886fe-4911-48f1-8aaa-46e0ddabb144-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.995808 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24f886fe-4911-48f1-8aaa-46e0ddabb144-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.995821 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjhh7\" (UniqueName: \"kubernetes.io/projected/24f886fe-4911-48f1-8aaa-46e0ddabb144-kube-api-access-rjhh7\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.995831 4755 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d807c0b7-7410-475a-bcde-00d34547ae06-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:17 crc kubenswrapper[4755]: I0224 09:59:17.999046 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e34fbb5a-81ec-442b-8b8d-f078ad03b517-proxy-ca-bundles\") pod \"controller-manager-667d4d966d-qf5k8\" (UID: \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\") " pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.006911 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e34fbb5a-81ec-442b-8b8d-f078ad03b517-config\") pod \"controller-manager-667d4d966d-qf5k8\" (UID: \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\") " pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.013959 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgv7g\" (UniqueName: 
\"kubernetes.io/projected/e34fbb5a-81ec-442b-8b8d-f078ad03b517-kube-api-access-jgv7g\") pod \"controller-manager-667d4d966d-qf5k8\" (UID: \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\") " pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.017746 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e34fbb5a-81ec-442b-8b8d-f078ad03b517-serving-cert\") pod \"controller-manager-667d4d966d-qf5k8\" (UID: \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\") " pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.084754 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.336507 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-667d4d966d-qf5k8"] Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.621694 4755 generic.go:334] "Generic (PLEG): container finished" podID="d61d7a2b-f731-4982-bab4-8c6db2c8c963" containerID="e63839b45457eb18a9c327d85b96f64561e38c1a39d0ab8677b0f9349a84360e" exitCode=0 Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.621772 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x69lf" event={"ID":"d61d7a2b-f731-4982-bab4-8c6db2c8c963","Type":"ContainerDied","Data":"e63839b45457eb18a9c327d85b96f64561e38c1a39d0ab8677b0f9349a84360e"} Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.623912 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" event={"ID":"d807c0b7-7410-475a-bcde-00d34547ae06","Type":"ContainerDied","Data":"656d712876af17c46f63da2e620865769ad98ec06bc2a19fae64089f80c97774"} Feb 24 
09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.623981 4755 scope.go:117] "RemoveContainer" containerID="c0bb29480979bafb98473cab708b10c3a5973df95d1d22eb8a6ab5d8c7db4938" Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.623938 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz" Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.631654 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.631655 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j" event={"ID":"24f886fe-4911-48f1-8aaa-46e0ddabb144","Type":"ContainerDied","Data":"51cf0fd19d85380a570707e792596284e788aed18799a5aaa5517ae6f5837c6b"} Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.639209 4755 generic.go:334] "Generic (PLEG): container finished" podID="47b5e8fc-e79e-4cd2-906b-0d8116e4d608" containerID="9999af3104d764c727499e2901ab07fe42a27bf7b84230b68ea3bf6f6310a7de" exitCode=0 Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.640528 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkgvw" event={"ID":"47b5e8fc-e79e-4cd2-906b-0d8116e4d608","Type":"ContainerDied","Data":"9999af3104d764c727499e2901ab07fe42a27bf7b84230b68ea3bf6f6310a7de"} Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.643298 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" event={"ID":"e34fbb5a-81ec-442b-8b8d-f078ad03b517","Type":"ContainerStarted","Data":"dd5a53acbd46c5f4a4ee1e149b9056dd8724e1ba285fecece5b84bc64908d2e0"} Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.643351 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" event={"ID":"e34fbb5a-81ec-442b-8b8d-f078ad03b517","Type":"ContainerStarted","Data":"22389f74fa09068759363477602118dfe11bd2ba65d36170bd8fbe0ef6cec202"} Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.643631 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.647888 4755 scope.go:117] "RemoveContainer" containerID="fa2dc332da46b7c32a5c204abbadba682ffb9c91322d579e74633c01de865508" Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.649042 4755 generic.go:334] "Generic (PLEG): container finished" podID="f634ec6f-7ee6-4789-bc6a-e860f86fe6fa" containerID="8d5f3021dc7e21efba2be5ff4cd7c1eef04b360380c65c20628367a0b049f456" exitCode=0 Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.649090 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7l52t" event={"ID":"f634ec6f-7ee6-4789-bc6a-e860f86fe6fa","Type":"ContainerDied","Data":"8d5f3021dc7e21efba2be5ff4cd7c1eef04b360380c65c20628367a0b049f456"} Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.651342 4755 generic.go:334] "Generic (PLEG): container finished" podID="d7c94c8e-698e-4c14-ae46-16f03e666f27" containerID="c64e85b262a640cff3148dc9e0c1f139a3bd48fbe74dc342c5e0a7b54e49751c" exitCode=0 Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.652126 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tztw" event={"ID":"d7c94c8e-698e-4c14-ae46-16f03e666f27","Type":"ContainerDied","Data":"c64e85b262a640cff3148dc9e0c1f139a3bd48fbe74dc342c5e0a7b54e49751c"} Feb 24 09:59:18 crc kubenswrapper[4755]: E0224 09:59:18.652579 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-6gt9z" podUID="2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1" Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.653130 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.672642 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j"] Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.676347 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6f6cbdcc5d-nsm7j"] Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.680735 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz"] Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.684819 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76f8646bc4-spfsz"] Feb 24 09:59:18 crc kubenswrapper[4755]: I0224 09:59:18.694460 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" podStartSLOduration=10.694429422 podStartE2EDuration="10.694429422s" podCreationTimestamp="2026-02-24 09:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:59:18.693043382 +0000 UTC m=+263.149565945" watchObservedRunningTime="2026-02-24 09:59:18.694429422 +0000 UTC m=+263.150951965" Feb 24 09:59:19 crc kubenswrapper[4755]: I0224 09:59:19.660732 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkgvw" 
event={"ID":"47b5e8fc-e79e-4cd2-906b-0d8116e4d608","Type":"ContainerStarted","Data":"7c4052db91abf87984da7cb7d75a28ab8d421f9d0eb6df3cfb2e27c96ad2627b"} Feb 24 09:59:19 crc kubenswrapper[4755]: I0224 09:59:19.663386 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7l52t" event={"ID":"f634ec6f-7ee6-4789-bc6a-e860f86fe6fa","Type":"ContainerStarted","Data":"b257035a3d896755e4e98e4a6b39b7b88b15862195f51af189948207a37adc95"} Feb 24 09:59:19 crc kubenswrapper[4755]: I0224 09:59:19.667086 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tztw" event={"ID":"d7c94c8e-698e-4c14-ae46-16f03e666f27","Type":"ContainerStarted","Data":"52b23cd4399738acbc9a9ef25a598fa59518d4ca84660fe0aa47b96edfbf68ef"} Feb 24 09:59:19 crc kubenswrapper[4755]: I0224 09:59:19.669596 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x69lf" event={"ID":"d61d7a2b-f731-4982-bab4-8c6db2c8c963","Type":"ContainerStarted","Data":"cfed8176e9cd19acdb935f513bc3bbfd3deecb488c2c5e241e1e78515b6d4045"} Feb 24 09:59:19 crc kubenswrapper[4755]: I0224 09:59:19.681299 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bkgvw" podStartSLOduration=1.832026251 podStartE2EDuration="29.681279802s" podCreationTimestamp="2026-02-24 09:58:50 +0000 UTC" firstStartedPulling="2026-02-24 09:58:51.231374951 +0000 UTC m=+235.687897494" lastFinishedPulling="2026-02-24 09:59:19.080628502 +0000 UTC m=+263.537151045" observedRunningTime="2026-02-24 09:59:19.680771877 +0000 UTC m=+264.137294420" watchObservedRunningTime="2026-02-24 09:59:19.681279802 +0000 UTC m=+264.137802335" Feb 24 09:59:19 crc kubenswrapper[4755]: I0224 09:59:19.699115 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x69lf" podStartSLOduration=1.848427856 
podStartE2EDuration="29.699098109s" podCreationTimestamp="2026-02-24 09:58:50 +0000 UTC" firstStartedPulling="2026-02-24 09:58:51.252496325 +0000 UTC m=+235.709018868" lastFinishedPulling="2026-02-24 09:59:19.103166578 +0000 UTC m=+263.559689121" observedRunningTime="2026-02-24 09:59:19.698265315 +0000 UTC m=+264.154787858" watchObservedRunningTime="2026-02-24 09:59:19.699098109 +0000 UTC m=+264.155620652" Feb 24 09:59:19 crc kubenswrapper[4755]: I0224 09:59:19.717745 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2tztw" podStartSLOduration=2.93429322 podStartE2EDuration="29.71772727s" podCreationTimestamp="2026-02-24 09:58:50 +0000 UTC" firstStartedPulling="2026-02-24 09:58:52.282666393 +0000 UTC m=+236.739188936" lastFinishedPulling="2026-02-24 09:59:19.066100443 +0000 UTC m=+263.522622986" observedRunningTime="2026-02-24 09:59:19.712828666 +0000 UTC m=+264.169351209" watchObservedRunningTime="2026-02-24 09:59:19.71772727 +0000 UTC m=+264.174249813" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.322854 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24f886fe-4911-48f1-8aaa-46e0ddabb144" path="/var/lib/kubelet/pods/24f886fe-4911-48f1-8aaa-46e0ddabb144/volumes" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.323612 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d807c0b7-7410-475a-bcde-00d34547ae06" path="/var/lib/kubelet/pods/d807c0b7-7410-475a-bcde-00d34547ae06/volumes" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.455351 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7l52t" podStartSLOduration=3.744390007 podStartE2EDuration="27.455328169s" podCreationTimestamp="2026-02-24 09:58:53 +0000 UTC" firstStartedPulling="2026-02-24 09:58:55.454117566 +0000 UTC m=+239.910640109" lastFinishedPulling="2026-02-24 09:59:19.165055738 +0000 UTC 
m=+263.621578271" observedRunningTime="2026-02-24 09:59:19.743177643 +0000 UTC m=+264.199700196" watchObservedRunningTime="2026-02-24 09:59:20.455328169 +0000 UTC m=+264.911850712" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.457248 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h"] Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.458025 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.461517 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.462181 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.462230 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.462727 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.464240 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.464955 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.473415 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h"] Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 
09:59:20.477004 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bkgvw" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.477089 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bkgvw" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.567529 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13d25739-cdd9-4873-a425-2c5f3951d70d-serving-cert\") pod \"route-controller-manager-766f7474f6-qth5h\" (UID: \"13d25739-cdd9-4873-a425-2c5f3951d70d\") " pod="openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.567815 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13d25739-cdd9-4873-a425-2c5f3951d70d-client-ca\") pod \"route-controller-manager-766f7474f6-qth5h\" (UID: \"13d25739-cdd9-4873-a425-2c5f3951d70d\") " pod="openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.567930 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13d25739-cdd9-4873-a425-2c5f3951d70d-config\") pod \"route-controller-manager-766f7474f6-qth5h\" (UID: \"13d25739-cdd9-4873-a425-2c5f3951d70d\") " pod="openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.568130 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkbsx\" (UniqueName: \"kubernetes.io/projected/13d25739-cdd9-4873-a425-2c5f3951d70d-kube-api-access-tkbsx\") pod 
\"route-controller-manager-766f7474f6-qth5h\" (UID: \"13d25739-cdd9-4873-a425-2c5f3951d70d\") " pod="openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.669182 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13d25739-cdd9-4873-a425-2c5f3951d70d-client-ca\") pod \"route-controller-manager-766f7474f6-qth5h\" (UID: \"13d25739-cdd9-4873-a425-2c5f3951d70d\") " pod="openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.669228 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13d25739-cdd9-4873-a425-2c5f3951d70d-config\") pod \"route-controller-manager-766f7474f6-qth5h\" (UID: \"13d25739-cdd9-4873-a425-2c5f3951d70d\") " pod="openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.669323 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkbsx\" (UniqueName: \"kubernetes.io/projected/13d25739-cdd9-4873-a425-2c5f3951d70d-kube-api-access-tkbsx\") pod \"route-controller-manager-766f7474f6-qth5h\" (UID: \"13d25739-cdd9-4873-a425-2c5f3951d70d\") " pod="openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.669376 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13d25739-cdd9-4873-a425-2c5f3951d70d-serving-cert\") pod \"route-controller-manager-766f7474f6-qth5h\" (UID: \"13d25739-cdd9-4873-a425-2c5f3951d70d\") " pod="openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.670426 4755 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13d25739-cdd9-4873-a425-2c5f3951d70d-client-ca\") pod \"route-controller-manager-766f7474f6-qth5h\" (UID: \"13d25739-cdd9-4873-a425-2c5f3951d70d\") " pod="openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.670868 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13d25739-cdd9-4873-a425-2c5f3951d70d-config\") pod \"route-controller-manager-766f7474f6-qth5h\" (UID: \"13d25739-cdd9-4873-a425-2c5f3951d70d\") " pod="openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.674702 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13d25739-cdd9-4873-a425-2c5f3951d70d-serving-cert\") pod \"route-controller-manager-766f7474f6-qth5h\" (UID: \"13d25739-cdd9-4873-a425-2c5f3951d70d\") " pod="openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.688982 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkbsx\" (UniqueName: \"kubernetes.io/projected/13d25739-cdd9-4873-a425-2c5f3951d70d-kube-api-access-tkbsx\") pod \"route-controller-manager-766f7474f6-qth5h\" (UID: \"13d25739-cdd9-4873-a425-2c5f3951d70d\") " pod="openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.733174 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x69lf" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.733239 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-x69lf" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.770790 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.895336 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2tztw" Feb 24 09:59:20 crc kubenswrapper[4755]: I0224 09:59:20.895395 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2tztw" Feb 24 09:59:21 crc kubenswrapper[4755]: I0224 09:59:21.212100 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h"] Feb 24 09:59:21 crc kubenswrapper[4755]: I0224 09:59:21.630647 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-bkgvw" podUID="47b5e8fc-e79e-4cd2-906b-0d8116e4d608" containerName="registry-server" probeResult="failure" output=< Feb 24 09:59:21 crc kubenswrapper[4755]: timeout: failed to connect service ":50051" within 1s Feb 24 09:59:21 crc kubenswrapper[4755]: > Feb 24 09:59:21 crc kubenswrapper[4755]: I0224 09:59:21.685397 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h" event={"ID":"13d25739-cdd9-4873-a425-2c5f3951d70d","Type":"ContainerStarted","Data":"86de10dfe82bd067cf5f0f8a94bd9e3ba23c85986f969504e60b53791afe2c48"} Feb 24 09:59:21 crc kubenswrapper[4755]: I0224 09:59:21.685458 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h" event={"ID":"13d25739-cdd9-4873-a425-2c5f3951d70d","Type":"ContainerStarted","Data":"74feafcf5a0bc2185310070005c742d9cda8cb3b59beab4d2a381ba9a83d385c"} Feb 24 09:59:21 
crc kubenswrapper[4755]: I0224 09:59:21.694715 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:59:21 crc kubenswrapper[4755]: I0224 09:59:21.694764 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:59:21 crc kubenswrapper[4755]: I0224 09:59:21.700848 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h" podStartSLOduration=13.700833466 podStartE2EDuration="13.700833466s" podCreationTimestamp="2026-02-24 09:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:59:21.697411476 +0000 UTC m=+266.153934019" watchObservedRunningTime="2026-02-24 09:59:21.700833466 +0000 UTC m=+266.157356009" Feb 24 09:59:21 crc kubenswrapper[4755]: I0224 09:59:21.784285 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-x69lf" podUID="d61d7a2b-f731-4982-bab4-8c6db2c8c963" containerName="registry-server" probeResult="failure" output=< Feb 24 09:59:21 crc kubenswrapper[4755]: timeout: failed to connect service ":50051" within 1s Feb 24 09:59:21 crc kubenswrapper[4755]: > Feb 24 09:59:21 crc kubenswrapper[4755]: I0224 09:59:21.942297 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-2tztw" podUID="d7c94c8e-698e-4c14-ae46-16f03e666f27" 
containerName="registry-server" probeResult="failure" output=< Feb 24 09:59:21 crc kubenswrapper[4755]: timeout: failed to connect service ":50051" within 1s Feb 24 09:59:21 crc kubenswrapper[4755]: > Feb 24 09:59:22 crc kubenswrapper[4755]: I0224 09:59:22.703769 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h" Feb 24 09:59:22 crc kubenswrapper[4755]: I0224 09:59:22.715879 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h" Feb 24 09:59:23 crc kubenswrapper[4755]: I0224 09:59:23.331578 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tft86" Feb 24 09:59:23 crc kubenswrapper[4755]: I0224 09:59:23.876119 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7l52t" Feb 24 09:59:23 crc kubenswrapper[4755]: I0224 09:59:23.876194 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7l52t" Feb 24 09:59:24 crc kubenswrapper[4755]: I0224 09:59:24.673486 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 24 09:59:24 crc kubenswrapper[4755]: I0224 09:59:24.674842 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:59:24 crc kubenswrapper[4755]: I0224 09:59:24.677650 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 24 09:59:24 crc kubenswrapper[4755]: I0224 09:59:24.677932 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 24 09:59:24 crc kubenswrapper[4755]: I0224 09:59:24.683804 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 24 09:59:24 crc kubenswrapper[4755]: I0224 09:59:24.723013 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d86c4730-43e9-4261-bdb1-a4e88e2a318f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d86c4730-43e9-4261-bdb1-a4e88e2a318f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:59:24 crc kubenswrapper[4755]: I0224 09:59:24.723351 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d86c4730-43e9-4261-bdb1-a4e88e2a318f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d86c4730-43e9-4261-bdb1-a4e88e2a318f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:59:24 crc kubenswrapper[4755]: I0224 09:59:24.824293 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d86c4730-43e9-4261-bdb1-a4e88e2a318f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d86c4730-43e9-4261-bdb1-a4e88e2a318f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:59:24 crc kubenswrapper[4755]: I0224 09:59:24.825340 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/d86c4730-43e9-4261-bdb1-a4e88e2a318f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d86c4730-43e9-4261-bdb1-a4e88e2a318f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:59:24 crc kubenswrapper[4755]: I0224 09:59:24.824790 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d86c4730-43e9-4261-bdb1-a4e88e2a318f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d86c4730-43e9-4261-bdb1-a4e88e2a318f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:59:24 crc kubenswrapper[4755]: I0224 09:59:24.849085 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d86c4730-43e9-4261-bdb1-a4e88e2a318f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d86c4730-43e9-4261-bdb1-a4e88e2a318f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:59:24 crc kubenswrapper[4755]: I0224 09:59:24.920182 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7l52t" podUID="f634ec6f-7ee6-4789-bc6a-e860f86fe6fa" containerName="registry-server" probeResult="failure" output=< Feb 24 09:59:24 crc kubenswrapper[4755]: timeout: failed to connect service ":50051" within 1s Feb 24 09:59:24 crc kubenswrapper[4755]: > Feb 24 09:59:24 crc kubenswrapper[4755]: I0224 09:59:24.993331 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:59:25 crc kubenswrapper[4755]: I0224 09:59:25.449011 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 24 09:59:25 crc kubenswrapper[4755]: I0224 09:59:25.721056 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d86c4730-43e9-4261-bdb1-a4e88e2a318f","Type":"ContainerStarted","Data":"f14c85f4ae5682313bc6ac8ecd5ab928af190e1db2f768002c6b24b99d19a09b"} Feb 24 09:59:26 crc kubenswrapper[4755]: I0224 09:59:26.732631 4755 generic.go:334] "Generic (PLEG): container finished" podID="d86c4730-43e9-4261-bdb1-a4e88e2a318f" containerID="9dbf7df44356013deac30c90a9563bf9bba0467d0d9ef03d8bc73a74b11bdacb" exitCode=0 Feb 24 09:59:26 crc kubenswrapper[4755]: I0224 09:59:26.732672 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d86c4730-43e9-4261-bdb1-a4e88e2a318f","Type":"ContainerDied","Data":"9dbf7df44356013deac30c90a9563bf9bba0467d0d9ef03d8bc73a74b11bdacb"} Feb 24 09:59:28 crc kubenswrapper[4755]: I0224 09:59:28.104211 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:59:28 crc kubenswrapper[4755]: I0224 09:59:28.167779 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d86c4730-43e9-4261-bdb1-a4e88e2a318f-kube-api-access\") pod \"d86c4730-43e9-4261-bdb1-a4e88e2a318f\" (UID: \"d86c4730-43e9-4261-bdb1-a4e88e2a318f\") " Feb 24 09:59:28 crc kubenswrapper[4755]: I0224 09:59:28.167864 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d86c4730-43e9-4261-bdb1-a4e88e2a318f-kubelet-dir\") pod \"d86c4730-43e9-4261-bdb1-a4e88e2a318f\" (UID: \"d86c4730-43e9-4261-bdb1-a4e88e2a318f\") " Feb 24 09:59:28 crc kubenswrapper[4755]: I0224 09:59:28.168023 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d86c4730-43e9-4261-bdb1-a4e88e2a318f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d86c4730-43e9-4261-bdb1-a4e88e2a318f" (UID: "d86c4730-43e9-4261-bdb1-a4e88e2a318f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:59:28 crc kubenswrapper[4755]: I0224 09:59:28.168408 4755 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d86c4730-43e9-4261-bdb1-a4e88e2a318f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:28 crc kubenswrapper[4755]: I0224 09:59:28.172866 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d86c4730-43e9-4261-bdb1-a4e88e2a318f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d86c4730-43e9-4261-bdb1-a4e88e2a318f" (UID: "d86c4730-43e9-4261-bdb1-a4e88e2a318f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:59:28 crc kubenswrapper[4755]: I0224 09:59:28.269930 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d86c4730-43e9-4261-bdb1-a4e88e2a318f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:28 crc kubenswrapper[4755]: I0224 09:59:28.749870 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d86c4730-43e9-4261-bdb1-a4e88e2a318f","Type":"ContainerDied","Data":"f14c85f4ae5682313bc6ac8ecd5ab928af190e1db2f768002c6b24b99d19a09b"} Feb 24 09:59:28 crc kubenswrapper[4755]: I0224 09:59:28.749993 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:59:28 crc kubenswrapper[4755]: I0224 09:59:28.750438 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f14c85f4ae5682313bc6ac8ecd5ab928af190e1db2f768002c6b24b99d19a09b" Feb 24 09:59:28 crc kubenswrapper[4755]: I0224 09:59:28.804766 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-667d4d966d-qf5k8"] Feb 24 09:59:28 crc kubenswrapper[4755]: I0224 09:59:28.805318 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" podUID="e34fbb5a-81ec-442b-8b8d-f078ad03b517" containerName="controller-manager" containerID="cri-o://dd5a53acbd46c5f4a4ee1e149b9056dd8724e1ba285fecece5b84bc64908d2e0" gracePeriod=30 Feb 24 09:59:28 crc kubenswrapper[4755]: I0224 09:59:28.900584 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h"] Feb 24 09:59:28 crc kubenswrapper[4755]: I0224 09:59:28.900855 4755 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h" podUID="13d25739-cdd9-4873-a425-2c5f3951d70d" containerName="route-controller-manager" containerID="cri-o://86de10dfe82bd067cf5f0f8a94bd9e3ba23c85986f969504e60b53791afe2c48" gracePeriod=30 Feb 24 09:59:29 crc kubenswrapper[4755]: I0224 09:59:29.758494 4755 generic.go:334] "Generic (PLEG): container finished" podID="e34fbb5a-81ec-442b-8b8d-f078ad03b517" containerID="dd5a53acbd46c5f4a4ee1e149b9056dd8724e1ba285fecece5b84bc64908d2e0" exitCode=0 Feb 24 09:59:29 crc kubenswrapper[4755]: I0224 09:59:29.758686 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" event={"ID":"e34fbb5a-81ec-442b-8b8d-f078ad03b517","Type":"ContainerDied","Data":"dd5a53acbd46c5f4a4ee1e149b9056dd8724e1ba285fecece5b84bc64908d2e0"} Feb 24 09:59:29 crc kubenswrapper[4755]: I0224 09:59:29.760201 4755 generic.go:334] "Generic (PLEG): container finished" podID="13d25739-cdd9-4873-a425-2c5f3951d70d" containerID="86de10dfe82bd067cf5f0f8a94bd9e3ba23c85986f969504e60b53791afe2c48" exitCode=0 Feb 24 09:59:29 crc kubenswrapper[4755]: I0224 09:59:29.760230 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h" event={"ID":"13d25739-cdd9-4873-a425-2c5f3951d70d","Type":"ContainerDied","Data":"86de10dfe82bd067cf5f0f8a94bd9e3ba23c85986f969504e60b53791afe2c48"} Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.545516 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bkgvw" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.589650 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bkgvw" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.600538 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.607236 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.645618 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn"] Feb 24 09:59:30 crc kubenswrapper[4755]: E0224 09:59:30.645869 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13d25739-cdd9-4873-a425-2c5f3951d70d" containerName="route-controller-manager" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.645884 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="13d25739-cdd9-4873-a425-2c5f3951d70d" containerName="route-controller-manager" Feb 24 09:59:30 crc kubenswrapper[4755]: E0224 09:59:30.645904 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d86c4730-43e9-4261-bdb1-a4e88e2a318f" containerName="pruner" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.645913 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d86c4730-43e9-4261-bdb1-a4e88e2a318f" containerName="pruner" Feb 24 09:59:30 crc kubenswrapper[4755]: E0224 09:59:30.645924 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e34fbb5a-81ec-442b-8b8d-f078ad03b517" containerName="controller-manager" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.645933 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="e34fbb5a-81ec-442b-8b8d-f078ad03b517" containerName="controller-manager" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.646045 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="13d25739-cdd9-4873-a425-2c5f3951d70d" containerName="route-controller-manager" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.646059 4755 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d86c4730-43e9-4261-bdb1-a4e88e2a318f" containerName="pruner" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.646107 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="e34fbb5a-81ec-442b-8b8d-f078ad03b517" containerName="controller-manager" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.646523 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.662877 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn"] Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.701082 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13d25739-cdd9-4873-a425-2c5f3951d70d-serving-cert\") pod \"13d25739-cdd9-4873-a425-2c5f3951d70d\" (UID: \"13d25739-cdd9-4873-a425-2c5f3951d70d\") " Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.701129 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13d25739-cdd9-4873-a425-2c5f3951d70d-client-ca\") pod \"13d25739-cdd9-4873-a425-2c5f3951d70d\" (UID: \"13d25739-cdd9-4873-a425-2c5f3951d70d\") " Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.701184 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkbsx\" (UniqueName: \"kubernetes.io/projected/13d25739-cdd9-4873-a425-2c5f3951d70d-kube-api-access-tkbsx\") pod \"13d25739-cdd9-4873-a425-2c5f3951d70d\" (UID: \"13d25739-cdd9-4873-a425-2c5f3951d70d\") " Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.701262 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/e34fbb5a-81ec-442b-8b8d-f078ad03b517-client-ca\") pod \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\" (UID: \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\") " Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.701291 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e34fbb5a-81ec-442b-8b8d-f078ad03b517-serving-cert\") pod \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\" (UID: \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\") " Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.701313 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e34fbb5a-81ec-442b-8b8d-f078ad03b517-config\") pod \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\" (UID: \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\") " Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.701345 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgv7g\" (UniqueName: \"kubernetes.io/projected/e34fbb5a-81ec-442b-8b8d-f078ad03b517-kube-api-access-jgv7g\") pod \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\" (UID: \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\") " Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.701394 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e34fbb5a-81ec-442b-8b8d-f078ad03b517-proxy-ca-bundles\") pod \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\" (UID: \"e34fbb5a-81ec-442b-8b8d-f078ad03b517\") " Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.701418 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13d25739-cdd9-4873-a425-2c5f3951d70d-config\") pod \"13d25739-cdd9-4873-a425-2c5f3951d70d\" (UID: \"13d25739-cdd9-4873-a425-2c5f3951d70d\") " Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.701613 4755 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d335c45-4b85-41ff-a15e-6666b28ab269-serving-cert\") pod \"controller-manager-7c5f5c55f4-czxpn\" (UID: \"4d335c45-4b85-41ff-a15e-6666b28ab269\") " pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.701666 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d335c45-4b85-41ff-a15e-6666b28ab269-client-ca\") pod \"controller-manager-7c5f5c55f4-czxpn\" (UID: \"4d335c45-4b85-41ff-a15e-6666b28ab269\") " pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.701704 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d335c45-4b85-41ff-a15e-6666b28ab269-config\") pod \"controller-manager-7c5f5c55f4-czxpn\" (UID: \"4d335c45-4b85-41ff-a15e-6666b28ab269\") " pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.701797 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d335c45-4b85-41ff-a15e-6666b28ab269-proxy-ca-bundles\") pod \"controller-manager-7c5f5c55f4-czxpn\" (UID: \"4d335c45-4b85-41ff-a15e-6666b28ab269\") " pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.701831 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4p5c\" (UniqueName: \"kubernetes.io/projected/4d335c45-4b85-41ff-a15e-6666b28ab269-kube-api-access-j4p5c\") pod 
\"controller-manager-7c5f5c55f4-czxpn\" (UID: \"4d335c45-4b85-41ff-a15e-6666b28ab269\") " pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.702439 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13d25739-cdd9-4873-a425-2c5f3951d70d-client-ca" (OuterVolumeSpecName: "client-ca") pod "13d25739-cdd9-4873-a425-2c5f3951d70d" (UID: "13d25739-cdd9-4873-a425-2c5f3951d70d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.702646 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e34fbb5a-81ec-442b-8b8d-f078ad03b517-client-ca" (OuterVolumeSpecName: "client-ca") pod "e34fbb5a-81ec-442b-8b8d-f078ad03b517" (UID: "e34fbb5a-81ec-442b-8b8d-f078ad03b517"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.703317 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e34fbb5a-81ec-442b-8b8d-f078ad03b517-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e34fbb5a-81ec-442b-8b8d-f078ad03b517" (UID: "e34fbb5a-81ec-442b-8b8d-f078ad03b517"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.703659 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e34fbb5a-81ec-442b-8b8d-f078ad03b517-config" (OuterVolumeSpecName: "config") pod "e34fbb5a-81ec-442b-8b8d-f078ad03b517" (UID: "e34fbb5a-81ec-442b-8b8d-f078ad03b517"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.704202 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13d25739-cdd9-4873-a425-2c5f3951d70d-config" (OuterVolumeSpecName: "config") pod "13d25739-cdd9-4873-a425-2c5f3951d70d" (UID: "13d25739-cdd9-4873-a425-2c5f3951d70d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.707216 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e34fbb5a-81ec-442b-8b8d-f078ad03b517-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e34fbb5a-81ec-442b-8b8d-f078ad03b517" (UID: "e34fbb5a-81ec-442b-8b8d-f078ad03b517"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.708721 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13d25739-cdd9-4873-a425-2c5f3951d70d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "13d25739-cdd9-4873-a425-2c5f3951d70d" (UID: "13d25739-cdd9-4873-a425-2c5f3951d70d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.709490 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13d25739-cdd9-4873-a425-2c5f3951d70d-kube-api-access-tkbsx" (OuterVolumeSpecName: "kube-api-access-tkbsx") pod "13d25739-cdd9-4873-a425-2c5f3951d70d" (UID: "13d25739-cdd9-4873-a425-2c5f3951d70d"). InnerVolumeSpecName "kube-api-access-tkbsx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.713541 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e34fbb5a-81ec-442b-8b8d-f078ad03b517-kube-api-access-jgv7g" (OuterVolumeSpecName: "kube-api-access-jgv7g") pod "e34fbb5a-81ec-442b-8b8d-f078ad03b517" (UID: "e34fbb5a-81ec-442b-8b8d-f078ad03b517"). InnerVolumeSpecName "kube-api-access-jgv7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.767270 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" event={"ID":"e34fbb5a-81ec-442b-8b8d-f078ad03b517","Type":"ContainerDied","Data":"22389f74fa09068759363477602118dfe11bd2ba65d36170bd8fbe0ef6cec202"} Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.767321 4755 scope.go:117] "RemoveContainer" containerID="dd5a53acbd46c5f4a4ee1e149b9056dd8724e1ba285fecece5b84bc64908d2e0" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.767326 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-667d4d966d-qf5k8" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.773908 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.773956 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h" event={"ID":"13d25739-cdd9-4873-a425-2c5f3951d70d","Type":"ContainerDied","Data":"74feafcf5a0bc2185310070005c742d9cda8cb3b59beab4d2a381ba9a83d385c"} Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.779815 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x69lf" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.786160 4755 scope.go:117] "RemoveContainer" containerID="86de10dfe82bd067cf5f0f8a94bd9e3ba23c85986f969504e60b53791afe2c48" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.802411 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d335c45-4b85-41ff-a15e-6666b28ab269-config\") pod \"controller-manager-7c5f5c55f4-czxpn\" (UID: \"4d335c45-4b85-41ff-a15e-6666b28ab269\") " pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.802493 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d335c45-4b85-41ff-a15e-6666b28ab269-proxy-ca-bundles\") pod \"controller-manager-7c5f5c55f4-czxpn\" (UID: \"4d335c45-4b85-41ff-a15e-6666b28ab269\") " pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.802515 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4p5c\" (UniqueName: \"kubernetes.io/projected/4d335c45-4b85-41ff-a15e-6666b28ab269-kube-api-access-j4p5c\") pod \"controller-manager-7c5f5c55f4-czxpn\" (UID: 
\"4d335c45-4b85-41ff-a15e-6666b28ab269\") " pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.802550 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d335c45-4b85-41ff-a15e-6666b28ab269-serving-cert\") pod \"controller-manager-7c5f5c55f4-czxpn\" (UID: \"4d335c45-4b85-41ff-a15e-6666b28ab269\") " pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.802577 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d335c45-4b85-41ff-a15e-6666b28ab269-client-ca\") pod \"controller-manager-7c5f5c55f4-czxpn\" (UID: \"4d335c45-4b85-41ff-a15e-6666b28ab269\") " pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.802618 4755 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e34fbb5a-81ec-442b-8b8d-f078ad03b517-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.802627 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e34fbb5a-81ec-442b-8b8d-f078ad03b517-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.802637 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e34fbb5a-81ec-442b-8b8d-f078ad03b517-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.802647 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgv7g\" (UniqueName: \"kubernetes.io/projected/e34fbb5a-81ec-442b-8b8d-f078ad03b517-kube-api-access-jgv7g\") on node \"crc\" DevicePath \"\"" Feb 
24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.802657 4755 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e34fbb5a-81ec-442b-8b8d-f078ad03b517-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.802665 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13d25739-cdd9-4873-a425-2c5f3951d70d-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.802673 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13d25739-cdd9-4873-a425-2c5f3951d70d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.802681 4755 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13d25739-cdd9-4873-a425-2c5f3951d70d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.802690 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkbsx\" (UniqueName: \"kubernetes.io/projected/13d25739-cdd9-4873-a425-2c5f3951d70d-kube-api-access-tkbsx\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.803389 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d335c45-4b85-41ff-a15e-6666b28ab269-client-ca\") pod \"controller-manager-7c5f5c55f4-czxpn\" (UID: \"4d335c45-4b85-41ff-a15e-6666b28ab269\") " pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.805224 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d335c45-4b85-41ff-a15e-6666b28ab269-proxy-ca-bundles\") pod 
\"controller-manager-7c5f5c55f4-czxpn\" (UID: \"4d335c45-4b85-41ff-a15e-6666b28ab269\") " pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.805735 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d335c45-4b85-41ff-a15e-6666b28ab269-config\") pod \"controller-manager-7c5f5c55f4-czxpn\" (UID: \"4d335c45-4b85-41ff-a15e-6666b28ab269\") " pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.813462 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d335c45-4b85-41ff-a15e-6666b28ab269-serving-cert\") pod \"controller-manager-7c5f5c55f4-czxpn\" (UID: \"4d335c45-4b85-41ff-a15e-6666b28ab269\") " pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.820872 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-667d4d966d-qf5k8"] Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.825300 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-667d4d966d-qf5k8"] Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.830548 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4p5c\" (UniqueName: \"kubernetes.io/projected/4d335c45-4b85-41ff-a15e-6666b28ab269-kube-api-access-j4p5c\") pod \"controller-manager-7c5f5c55f4-czxpn\" (UID: \"4d335c45-4b85-41ff-a15e-6666b28ab269\") " pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.837125 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h"] Feb 24 09:59:30 crc 
kubenswrapper[4755]: I0224 09:59:30.837767 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x69lf" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.844898 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-766f7474f6-qth5h"] Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.932987 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2tztw" Feb 24 09:59:30 crc kubenswrapper[4755]: I0224 09:59:30.987397 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2tztw" Feb 24 09:59:31 crc kubenswrapper[4755]: I0224 09:59:31.066442 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" Feb 24 09:59:31 crc kubenswrapper[4755]: I0224 09:59:31.466845 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn"] Feb 24 09:59:31 crc kubenswrapper[4755]: W0224 09:59:31.480847 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d335c45_4b85_41ff_a15e_6666b28ab269.slice/crio-3c2808e5a6a2b8876d4a53832d15b1dc0bf8608000628c0edc839b425cd97506 WatchSource:0}: Error finding container 3c2808e5a6a2b8876d4a53832d15b1dc0bf8608000628c0edc839b425cd97506: Status 404 returned error can't find the container with id 3c2808e5a6a2b8876d4a53832d15b1dc0bf8608000628c0edc839b425cd97506 Feb 24 09:59:31 crc kubenswrapper[4755]: I0224 09:59:31.788191 4755 generic.go:334] "Generic (PLEG): container finished" podID="9fce8bb3-2fc9-496a-b0c0-873427d27571" containerID="3d344a74d1815626ad9ff697fc614646f9cbf0e2683c420b80c98d0a5da8e51c" exitCode=0 Feb 24 09:59:31 crc kubenswrapper[4755]: I0224 
09:59:31.788261 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gqk6" event={"ID":"9fce8bb3-2fc9-496a-b0c0-873427d27571","Type":"ContainerDied","Data":"3d344a74d1815626ad9ff697fc614646f9cbf0e2683c420b80c98d0a5da8e51c"} Feb 24 09:59:31 crc kubenswrapper[4755]: I0224 09:59:31.793252 4755 generic.go:334] "Generic (PLEG): container finished" podID="f4344c16-3181-42d3-9d94-6cccd3fe8cc0" containerID="e74066db263cbe180bd8ddb91a49dd0dd23fc01efbf6fae4e947e800e319adcb" exitCode=0 Feb 24 09:59:31 crc kubenswrapper[4755]: I0224 09:59:31.793316 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqjwv" event={"ID":"f4344c16-3181-42d3-9d94-6cccd3fe8cc0","Type":"ContainerDied","Data":"e74066db263cbe180bd8ddb91a49dd0dd23fc01efbf6fae4e947e800e319adcb"} Feb 24 09:59:31 crc kubenswrapper[4755]: I0224 09:59:31.796424 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" event={"ID":"4d335c45-4b85-41ff-a15e-6666b28ab269","Type":"ContainerStarted","Data":"c54ec17530b4e4d038c8b62e61581101d256c67d6ecbc8e94ee0221b5fe8e331"} Feb 24 09:59:31 crc kubenswrapper[4755]: I0224 09:59:31.796457 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" event={"ID":"4d335c45-4b85-41ff-a15e-6666b28ab269","Type":"ContainerStarted","Data":"3c2808e5a6a2b8876d4a53832d15b1dc0bf8608000628c0edc839b425cd97506"} Feb 24 09:59:31 crc kubenswrapper[4755]: I0224 09:59:31.796845 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" Feb 24 09:59:31 crc kubenswrapper[4755]: I0224 09:59:31.798980 4755 generic.go:334] "Generic (PLEG): container finished" podID="f5cfb137-48b9-4d3d-ab93-7f688d450d2a" containerID="d5db72f8c93b803d8cd840a0b065acf2c0c6a8863a4e8e985d35e03e47db2ba8" exitCode=0 Feb 24 
09:59:31 crc kubenswrapper[4755]: I0224 09:59:31.799051 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjxsm" event={"ID":"f5cfb137-48b9-4d3d-ab93-7f688d450d2a","Type":"ContainerDied","Data":"d5db72f8c93b803d8cd840a0b065acf2c0c6a8863a4e8e985d35e03e47db2ba8"} Feb 24 09:59:31 crc kubenswrapper[4755]: I0224 09:59:31.807224 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" Feb 24 09:59:31 crc kubenswrapper[4755]: I0224 09:59:31.836231 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" podStartSLOduration=3.836209952 podStartE2EDuration="3.836209952s" podCreationTimestamp="2026-02-24 09:59:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:59:31.83345014 +0000 UTC m=+276.289972683" watchObservedRunningTime="2026-02-24 09:59:31.836209952 +0000 UTC m=+276.292732495" Feb 24 09:59:31 crc kubenswrapper[4755]: I0224 09:59:31.871849 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 24 09:59:31 crc kubenswrapper[4755]: I0224 09:59:31.872638 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:59:31 crc kubenswrapper[4755]: I0224 09:59:31.874171 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 24 09:59:31 crc kubenswrapper[4755]: I0224 09:59:31.875359 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 24 09:59:31 crc kubenswrapper[4755]: I0224 09:59:31.899988 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 24 09:59:31 crc kubenswrapper[4755]: I0224 09:59:31.915199 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/48e88ab9-cdeb-458e-b1ff-5fc96d923829-kubelet-dir\") pod \"installer-9-crc\" (UID: \"48e88ab9-cdeb-458e-b1ff-5fc96d923829\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:59:31 crc kubenswrapper[4755]: I0224 09:59:31.915271 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48e88ab9-cdeb-458e-b1ff-5fc96d923829-kube-api-access\") pod \"installer-9-crc\" (UID: \"48e88ab9-cdeb-458e-b1ff-5fc96d923829\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:59:31 crc kubenswrapper[4755]: I0224 09:59:31.915316 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/48e88ab9-cdeb-458e-b1ff-5fc96d923829-var-lock\") pod \"installer-9-crc\" (UID: \"48e88ab9-cdeb-458e-b1ff-5fc96d923829\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:59:32 crc kubenswrapper[4755]: I0224 09:59:32.016342 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/48e88ab9-cdeb-458e-b1ff-5fc96d923829-var-lock\") pod \"installer-9-crc\" (UID: \"48e88ab9-cdeb-458e-b1ff-5fc96d923829\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:59:32 crc kubenswrapper[4755]: I0224 09:59:32.016485 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/48e88ab9-cdeb-458e-b1ff-5fc96d923829-var-lock\") pod \"installer-9-crc\" (UID: \"48e88ab9-cdeb-458e-b1ff-5fc96d923829\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:59:32 crc kubenswrapper[4755]: I0224 09:59:32.016495 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/48e88ab9-cdeb-458e-b1ff-5fc96d923829-kubelet-dir\") pod \"installer-9-crc\" (UID: \"48e88ab9-cdeb-458e-b1ff-5fc96d923829\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:59:32 crc kubenswrapper[4755]: I0224 09:59:32.016542 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/48e88ab9-cdeb-458e-b1ff-5fc96d923829-kubelet-dir\") pod \"installer-9-crc\" (UID: \"48e88ab9-cdeb-458e-b1ff-5fc96d923829\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:59:32 crc kubenswrapper[4755]: I0224 09:59:32.016636 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48e88ab9-cdeb-458e-b1ff-5fc96d923829-kube-api-access\") pod \"installer-9-crc\" (UID: \"48e88ab9-cdeb-458e-b1ff-5fc96d923829\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:59:32 crc kubenswrapper[4755]: I0224 09:59:32.047541 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48e88ab9-cdeb-458e-b1ff-5fc96d923829-kube-api-access\") pod \"installer-9-crc\" (UID: \"48e88ab9-cdeb-458e-b1ff-5fc96d923829\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:59:32 crc kubenswrapper[4755]: I0224 09:59:32.185288 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:59:32 crc kubenswrapper[4755]: I0224 09:59:32.327473 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13d25739-cdd9-4873-a425-2c5f3951d70d" path="/var/lib/kubelet/pods/13d25739-cdd9-4873-a425-2c5f3951d70d/volumes" Feb 24 09:59:32 crc kubenswrapper[4755]: I0224 09:59:32.328315 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e34fbb5a-81ec-442b-8b8d-f078ad03b517" path="/var/lib/kubelet/pods/e34fbb5a-81ec-442b-8b8d-f078ad03b517/volumes" Feb 24 09:59:32 crc kubenswrapper[4755]: I0224 09:59:32.570993 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 24 09:59:32 crc kubenswrapper[4755]: W0224 09:59:32.581558 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod48e88ab9_cdeb_458e_b1ff_5fc96d923829.slice/crio-02290b6a5d8e98aadb9d3531f883042e49060367982f3339fc20749c8fce5fab WatchSource:0}: Error finding container 02290b6a5d8e98aadb9d3531f883042e49060367982f3339fc20749c8fce5fab: Status 404 returned error can't find the container with id 02290b6a5d8e98aadb9d3531f883042e49060367982f3339fc20749c8fce5fab Feb 24 09:59:32 crc kubenswrapper[4755]: I0224 09:59:32.810414 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjxsm" event={"ID":"f5cfb137-48b9-4d3d-ab93-7f688d450d2a","Type":"ContainerStarted","Data":"5cae7774d67427d10a21539b1c95ce10da150ba853dd1b2552ef7dda704a19f6"} Feb 24 09:59:32 crc kubenswrapper[4755]: I0224 09:59:32.812701 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"48e88ab9-cdeb-458e-b1ff-5fc96d923829","Type":"ContainerStarted","Data":"02290b6a5d8e98aadb9d3531f883042e49060367982f3339fc20749c8fce5fab"} Feb 24 09:59:32 crc kubenswrapper[4755]: I0224 09:59:32.814676 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gqk6" event={"ID":"9fce8bb3-2fc9-496a-b0c0-873427d27571","Type":"ContainerStarted","Data":"500c85f84af8fb83736af6b19c0b282434b665f5e78e7e2270b2ed50bdf6cb22"} Feb 24 09:59:32 crc kubenswrapper[4755]: I0224 09:59:32.817623 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqjwv" event={"ID":"f4344c16-3181-42d3-9d94-6cccd3fe8cc0","Type":"ContainerStarted","Data":"d3d10d6652d025f092a0162d22f049da7f76a95446b3952850c1fb076cf25db4"} Feb 24 09:59:32 crc kubenswrapper[4755]: I0224 09:59:32.834661 4755 generic.go:334] "Generic (PLEG): container finished" podID="2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1" containerID="fba14e95440e80e9364929e04927c02930f6c351c23f530704f2b7f449709502" exitCode=0 Feb 24 09:59:32 crc kubenswrapper[4755]: I0224 09:59:32.835302 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gt9z" event={"ID":"2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1","Type":"ContainerDied","Data":"fba14e95440e80e9364929e04927c02930f6c351c23f530704f2b7f449709502"} Feb 24 09:59:32 crc kubenswrapper[4755]: I0224 09:59:32.838163 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pjxsm" podStartSLOduration=3.064029952 podStartE2EDuration="40.838146567s" podCreationTimestamp="2026-02-24 09:58:52 +0000 UTC" firstStartedPulling="2026-02-24 09:58:54.406772709 +0000 UTC m=+238.863295252" lastFinishedPulling="2026-02-24 09:59:32.180889324 +0000 UTC m=+276.637411867" observedRunningTime="2026-02-24 09:59:32.833953754 +0000 UTC m=+277.290476297" watchObservedRunningTime="2026-02-24 09:59:32.838146567 +0000 UTC 
m=+277.294669120" Feb 24 09:59:32 crc kubenswrapper[4755]: I0224 09:59:32.852276 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mqjwv" podStartSLOduration=2.684992456 podStartE2EDuration="43.852255995s" podCreationTimestamp="2026-02-24 09:58:49 +0000 UTC" firstStartedPulling="2026-02-24 09:58:51.226727556 +0000 UTC m=+235.683250099" lastFinishedPulling="2026-02-24 09:59:32.393991105 +0000 UTC m=+276.850513638" observedRunningTime="2026-02-24 09:59:32.850503753 +0000 UTC m=+277.307026296" watchObservedRunningTime="2026-02-24 09:59:32.852255995 +0000 UTC m=+277.308778538" Feb 24 09:59:32 crc kubenswrapper[4755]: I0224 09:59:32.865111 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2tztw"] Feb 24 09:59:32 crc kubenswrapper[4755]: I0224 09:59:32.865317 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2tztw" podUID="d7c94c8e-698e-4c14-ae46-16f03e666f27" containerName="registry-server" containerID="cri-o://52b23cd4399738acbc9a9ef25a598fa59518d4ca84660fe0aa47b96edfbf68ef" gracePeriod=2 Feb 24 09:59:32 crc kubenswrapper[4755]: I0224 09:59:32.870322 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2gqk6" podStartSLOduration=1.775988481 podStartE2EDuration="39.870309469s" podCreationTimestamp="2026-02-24 09:58:53 +0000 UTC" firstStartedPulling="2026-02-24 09:58:54.343505643 +0000 UTC m=+238.800028186" lastFinishedPulling="2026-02-24 09:59:32.437826611 +0000 UTC m=+276.894349174" observedRunningTime="2026-02-24 09:59:32.864134706 +0000 UTC m=+277.320657259" watchObservedRunningTime="2026-02-24 09:59:32.870309469 +0000 UTC m=+277.326832012" Feb 24 09:59:32 crc kubenswrapper[4755]: I0224 09:59:32.979205 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-pjxsm" Feb 24 09:59:32 crc kubenswrapper[4755]: I0224 09:59:32.979304 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pjxsm" Feb 24 09:59:33 crc kubenswrapper[4755]: I0224 09:59:33.067436 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x69lf"] Feb 24 09:59:33 crc kubenswrapper[4755]: I0224 09:59:33.067646 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x69lf" podUID="d61d7a2b-f731-4982-bab4-8c6db2c8c963" containerName="registry-server" containerID="cri-o://cfed8176e9cd19acdb935f513bc3bbfd3deecb488c2c5e241e1e78515b6d4045" gracePeriod=2 Feb 24 09:59:33 crc kubenswrapper[4755]: I0224 09:59:33.218947 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2tztw" Feb 24 09:59:33 crc kubenswrapper[4755]: I0224 09:59:33.230981 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7c94c8e-698e-4c14-ae46-16f03e666f27-utilities\") pod \"d7c94c8e-698e-4c14-ae46-16f03e666f27\" (UID: \"d7c94c8e-698e-4c14-ae46-16f03e666f27\") " Feb 24 09:59:33 crc kubenswrapper[4755]: I0224 09:59:33.231042 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhccj\" (UniqueName: \"kubernetes.io/projected/d7c94c8e-698e-4c14-ae46-16f03e666f27-kube-api-access-fhccj\") pod \"d7c94c8e-698e-4c14-ae46-16f03e666f27\" (UID: \"d7c94c8e-698e-4c14-ae46-16f03e666f27\") " Feb 24 09:59:33 crc kubenswrapper[4755]: I0224 09:59:33.231174 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7c94c8e-698e-4c14-ae46-16f03e666f27-catalog-content\") pod \"d7c94c8e-698e-4c14-ae46-16f03e666f27\" 
(UID: \"d7c94c8e-698e-4c14-ae46-16f03e666f27\") " Feb 24 09:59:33 crc kubenswrapper[4755]: I0224 09:59:33.232200 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7c94c8e-698e-4c14-ae46-16f03e666f27-utilities" (OuterVolumeSpecName: "utilities") pod "d7c94c8e-698e-4c14-ae46-16f03e666f27" (UID: "d7c94c8e-698e-4c14-ae46-16f03e666f27"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:59:33 crc kubenswrapper[4755]: I0224 09:59:33.241327 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7c94c8e-698e-4c14-ae46-16f03e666f27-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:33 crc kubenswrapper[4755]: I0224 09:59:33.271635 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7c94c8e-698e-4c14-ae46-16f03e666f27-kube-api-access-fhccj" (OuterVolumeSpecName: "kube-api-access-fhccj") pod "d7c94c8e-698e-4c14-ae46-16f03e666f27" (UID: "d7c94c8e-698e-4c14-ae46-16f03e666f27"). InnerVolumeSpecName "kube-api-access-fhccj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:59:33 crc kubenswrapper[4755]: I0224 09:59:33.286004 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7c94c8e-698e-4c14-ae46-16f03e666f27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7c94c8e-698e-4c14-ae46-16f03e666f27" (UID: "d7c94c8e-698e-4c14-ae46-16f03e666f27"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:59:33 crc kubenswrapper[4755]: I0224 09:59:33.342703 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhccj\" (UniqueName: \"kubernetes.io/projected/d7c94c8e-698e-4c14-ae46-16f03e666f27-kube-api-access-fhccj\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:33 crc kubenswrapper[4755]: I0224 09:59:33.342733 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7c94c8e-698e-4c14-ae46-16f03e666f27-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:33 crc kubenswrapper[4755]: I0224 09:59:33.462638 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj"] Feb 24 09:59:33 crc kubenswrapper[4755]: E0224 09:59:33.463005 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c94c8e-698e-4c14-ae46-16f03e666f27" containerName="registry-server" Feb 24 09:59:33 crc kubenswrapper[4755]: I0224 09:59:33.463108 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c94c8e-698e-4c14-ae46-16f03e666f27" containerName="registry-server" Feb 24 09:59:33 crc kubenswrapper[4755]: E0224 09:59:33.463182 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c94c8e-698e-4c14-ae46-16f03e666f27" containerName="extract-content" Feb 24 09:59:33 crc kubenswrapper[4755]: I0224 09:59:33.463244 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c94c8e-698e-4c14-ae46-16f03e666f27" containerName="extract-content" Feb 24 09:59:33 crc kubenswrapper[4755]: E0224 09:59:33.463304 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c94c8e-698e-4c14-ae46-16f03e666f27" containerName="extract-utilities" Feb 24 09:59:33 crc kubenswrapper[4755]: I0224 09:59:33.463354 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c94c8e-698e-4c14-ae46-16f03e666f27" 
containerName="extract-utilities" Feb 24 09:59:33 crc kubenswrapper[4755]: I0224 09:59:33.463511 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7c94c8e-698e-4c14-ae46-16f03e666f27" containerName="registry-server" Feb 24 09:59:33 crc kubenswrapper[4755]: I0224 09:59:33.463918 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj" Feb 24 09:59:33 crc kubenswrapper[4755]: W0224 09:59:33.465549 4755 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 24 09:59:33 crc kubenswrapper[4755]: W0224 09:59:33.465562 4755 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": failed to list *v1.Secret: secrets "route-controller-manager-sa-dockercfg-h2zr2" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 24 09:59:33 crc kubenswrapper[4755]: E0224 09:59:33.465582 4755 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 24 09:59:33 crc kubenswrapper[4755]: E0224 09:59:33.465604 4755 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-h2zr2\": 
Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"route-controller-manager-sa-dockercfg-h2zr2\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 24 09:59:33 crc kubenswrapper[4755]: W0224 09:59:33.465986 4755 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 24 09:59:33 crc kubenswrapper[4755]: E0224 09:59:33.466024 4755 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 24 09:59:33 crc kubenswrapper[4755]: W0224 09:59:33.466096 4755 reflector.go:561] object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 24 09:59:33 crc kubenswrapper[4755]: E0224 09:59:33.466110 4755 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace 
\"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 24 09:59:33 crc kubenswrapper[4755]: W0224 09:59:33.466158 4755 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 24 09:59:33 crc kubenswrapper[4755]: E0224 09:59:33.466169 4755 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 24 09:59:33 crc kubenswrapper[4755]: W0224 09:59:33.466195 4755 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 24 09:59:33 crc kubenswrapper[4755]: E0224 09:59:33.466205 4755 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 24 09:59:33 crc kubenswrapper[4755]: I0224 
09:59:33.491515 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2gqk6" Feb 24 09:59:33 crc kubenswrapper[4755]: I0224 09:59:33.491558 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2gqk6" Feb 24 09:59:33 crc kubenswrapper[4755]: I0224 09:59:33.505533 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj"] Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.223383 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-pjxsm" podUID="f5cfb137-48b9-4d3d-ab93-7f688d450d2a" containerName="registry-server" probeResult="failure" output=< Feb 24 09:59:34 crc kubenswrapper[4755]: timeout: failed to connect service ":50051" within 1s Feb 24 09:59:34 crc kubenswrapper[4755]: > Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.253868 4755 generic.go:334] "Generic (PLEG): container finished" podID="d7c94c8e-698e-4c14-ae46-16f03e666f27" containerID="52b23cd4399738acbc9a9ef25a598fa59518d4ca84660fe0aa47b96edfbf68ef" exitCode=0 Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.253933 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tztw" event={"ID":"d7c94c8e-698e-4c14-ae46-16f03e666f27","Type":"ContainerDied","Data":"52b23cd4399738acbc9a9ef25a598fa59518d4ca84660fe0aa47b96edfbf68ef"} Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.253967 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tztw" event={"ID":"d7c94c8e-698e-4c14-ae46-16f03e666f27","Type":"ContainerDied","Data":"22fdb7e186e52f8f476ae63fec4c92e4333ddc9a5669d0189ea1ea31bee4f3f3"} Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.253988 4755 scope.go:117] "RemoveContainer" 
containerID="52b23cd4399738acbc9a9ef25a598fa59518d4ca84660fe0aa47b96edfbf68ef" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.254158 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2tztw" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.265336 4755 generic.go:334] "Generic (PLEG): container finished" podID="d61d7a2b-f731-4982-bab4-8c6db2c8c963" containerID="cfed8176e9cd19acdb935f513bc3bbfd3deecb488c2c5e241e1e78515b6d4045" exitCode=0 Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.265411 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x69lf" event={"ID":"d61d7a2b-f731-4982-bab4-8c6db2c8c963","Type":"ContainerDied","Data":"cfed8176e9cd19acdb935f513bc3bbfd3deecb488c2c5e241e1e78515b6d4045"} Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.267736 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"48e88ab9-cdeb-458e-b1ff-5fc96d923829","Type":"ContainerStarted","Data":"32d5935fbce34be3dca711696b778f971fc05581131fa8bdb137141133f9683c"} Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.272664 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7l52t" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.293435 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.293414317 podStartE2EDuration="3.293414317s" podCreationTimestamp="2026-02-24 09:59:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:59:34.292370217 +0000 UTC m=+278.748892760" watchObservedRunningTime="2026-02-24 09:59:34.293414317 +0000 UTC m=+278.749936860" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.303138 4755 
scope.go:117] "RemoveContainer" containerID="c64e85b262a640cff3148dc9e0c1f139a3bd48fbe74dc342c5e0a7b54e49751c" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.318607 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24hfq\" (UniqueName: \"kubernetes.io/projected/a8bce638-dac9-429f-96fc-026923ee4c8e-kube-api-access-24hfq\") pod \"route-controller-manager-7b46b9c4c-8xthj\" (UID: \"a8bce638-dac9-429f-96fc-026923ee4c8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.318720 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8bce638-dac9-429f-96fc-026923ee4c8e-config\") pod \"route-controller-manager-7b46b9c4c-8xthj\" (UID: \"a8bce638-dac9-429f-96fc-026923ee4c8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.318810 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8bce638-dac9-429f-96fc-026923ee4c8e-client-ca\") pod \"route-controller-manager-7b46b9c4c-8xthj\" (UID: \"a8bce638-dac9-429f-96fc-026923ee4c8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.318927 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8bce638-dac9-429f-96fc-026923ee4c8e-serving-cert\") pod \"route-controller-manager-7b46b9c4c-8xthj\" (UID: \"a8bce638-dac9-429f-96fc-026923ee4c8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.339470 4755 
scope.go:117] "RemoveContainer" containerID="b69a403d03bb0ab7f387d7d7eb3663a58165f0edf243ff9ffe4a118789e4b19e" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.374164 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2tztw"] Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.374198 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2tztw"] Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.374275 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7l52t" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.393342 4755 scope.go:117] "RemoveContainer" containerID="52b23cd4399738acbc9a9ef25a598fa59518d4ca84660fe0aa47b96edfbf68ef" Feb 24 09:59:34 crc kubenswrapper[4755]: E0224 09:59:34.394001 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52b23cd4399738acbc9a9ef25a598fa59518d4ca84660fe0aa47b96edfbf68ef\": container with ID starting with 52b23cd4399738acbc9a9ef25a598fa59518d4ca84660fe0aa47b96edfbf68ef not found: ID does not exist" containerID="52b23cd4399738acbc9a9ef25a598fa59518d4ca84660fe0aa47b96edfbf68ef" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.394028 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b23cd4399738acbc9a9ef25a598fa59518d4ca84660fe0aa47b96edfbf68ef"} err="failed to get container status \"52b23cd4399738acbc9a9ef25a598fa59518d4ca84660fe0aa47b96edfbf68ef\": rpc error: code = NotFound desc = could not find container \"52b23cd4399738acbc9a9ef25a598fa59518d4ca84660fe0aa47b96edfbf68ef\": container with ID starting with 52b23cd4399738acbc9a9ef25a598fa59518d4ca84660fe0aa47b96edfbf68ef not found: ID does not exist" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.394046 4755 scope.go:117] "RemoveContainer" 
containerID="c64e85b262a640cff3148dc9e0c1f139a3bd48fbe74dc342c5e0a7b54e49751c" Feb 24 09:59:34 crc kubenswrapper[4755]: E0224 09:59:34.394524 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c64e85b262a640cff3148dc9e0c1f139a3bd48fbe74dc342c5e0a7b54e49751c\": container with ID starting with c64e85b262a640cff3148dc9e0c1f139a3bd48fbe74dc342c5e0a7b54e49751c not found: ID does not exist" containerID="c64e85b262a640cff3148dc9e0c1f139a3bd48fbe74dc342c5e0a7b54e49751c" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.394547 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c64e85b262a640cff3148dc9e0c1f139a3bd48fbe74dc342c5e0a7b54e49751c"} err="failed to get container status \"c64e85b262a640cff3148dc9e0c1f139a3bd48fbe74dc342c5e0a7b54e49751c\": rpc error: code = NotFound desc = could not find container \"c64e85b262a640cff3148dc9e0c1f139a3bd48fbe74dc342c5e0a7b54e49751c\": container with ID starting with c64e85b262a640cff3148dc9e0c1f139a3bd48fbe74dc342c5e0a7b54e49751c not found: ID does not exist" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.394562 4755 scope.go:117] "RemoveContainer" containerID="b69a403d03bb0ab7f387d7d7eb3663a58165f0edf243ff9ffe4a118789e4b19e" Feb 24 09:59:34 crc kubenswrapper[4755]: E0224 09:59:34.394850 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b69a403d03bb0ab7f387d7d7eb3663a58165f0edf243ff9ffe4a118789e4b19e\": container with ID starting with b69a403d03bb0ab7f387d7d7eb3663a58165f0edf243ff9ffe4a118789e4b19e not found: ID does not exist" containerID="b69a403d03bb0ab7f387d7d7eb3663a58165f0edf243ff9ffe4a118789e4b19e" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.394875 4755 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b69a403d03bb0ab7f387d7d7eb3663a58165f0edf243ff9ffe4a118789e4b19e"} err="failed to get container status \"b69a403d03bb0ab7f387d7d7eb3663a58165f0edf243ff9ffe4a118789e4b19e\": rpc error: code = NotFound desc = could not find container \"b69a403d03bb0ab7f387d7d7eb3663a58165f0edf243ff9ffe4a118789e4b19e\": container with ID starting with b69a403d03bb0ab7f387d7d7eb3663a58165f0edf243ff9ffe4a118789e4b19e not found: ID does not exist" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.419989 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8bce638-dac9-429f-96fc-026923ee4c8e-serving-cert\") pod \"route-controller-manager-7b46b9c4c-8xthj\" (UID: \"a8bce638-dac9-429f-96fc-026923ee4c8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.420122 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24hfq\" (UniqueName: \"kubernetes.io/projected/a8bce638-dac9-429f-96fc-026923ee4c8e-kube-api-access-24hfq\") pod \"route-controller-manager-7b46b9c4c-8xthj\" (UID: \"a8bce638-dac9-429f-96fc-026923ee4c8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.420207 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8bce638-dac9-429f-96fc-026923ee4c8e-config\") pod \"route-controller-manager-7b46b9c4c-8xthj\" (UID: \"a8bce638-dac9-429f-96fc-026923ee4c8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.420275 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a8bce638-dac9-429f-96fc-026923ee4c8e-client-ca\") pod \"route-controller-manager-7b46b9c4c-8xthj\" (UID: \"a8bce638-dac9-429f-96fc-026923ee4c8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.475347 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.543249 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x69lf" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.693818 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.714443 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.722996 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d61d7a2b-f731-4982-bab4-8c6db2c8c963-utilities\") pod \"d61d7a2b-f731-4982-bab4-8c6db2c8c963\" (UID: \"d61d7a2b-f731-4982-bab4-8c6db2c8c963\") " Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.723122 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmr56\" (UniqueName: \"kubernetes.io/projected/d61d7a2b-f731-4982-bab4-8c6db2c8c963-kube-api-access-vmr56\") pod \"d61d7a2b-f731-4982-bab4-8c6db2c8c963\" (UID: \"d61d7a2b-f731-4982-bab4-8c6db2c8c963\") " Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.723168 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d61d7a2b-f731-4982-bab4-8c6db2c8c963-catalog-content\") pod \"d61d7a2b-f731-4982-bab4-8c6db2c8c963\" (UID: \"d61d7a2b-f731-4982-bab4-8c6db2c8c963\") " Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.723669 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d61d7a2b-f731-4982-bab4-8c6db2c8c963-utilities" (OuterVolumeSpecName: "utilities") pod "d61d7a2b-f731-4982-bab4-8c6db2c8c963" (UID: "d61d7a2b-f731-4982-bab4-8c6db2c8c963"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.723797 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8bce638-dac9-429f-96fc-026923ee4c8e-client-ca\") pod \"route-controller-manager-7b46b9c4c-8xthj\" (UID: \"a8bce638-dac9-429f-96fc-026923ee4c8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.727588 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d61d7a2b-f731-4982-bab4-8c6db2c8c963-kube-api-access-vmr56" (OuterVolumeSpecName: "kube-api-access-vmr56") pod "d61d7a2b-f731-4982-bab4-8c6db2c8c963" (UID: "d61d7a2b-f731-4982-bab4-8c6db2c8c963"). InnerVolumeSpecName "kube-api-access-vmr56". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.778021 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d61d7a2b-f731-4982-bab4-8c6db2c8c963-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d61d7a2b-f731-4982-bab4-8c6db2c8c963" (UID: "d61d7a2b-f731-4982-bab4-8c6db2c8c963"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.824758 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmr56\" (UniqueName: \"kubernetes.io/projected/d61d7a2b-f731-4982-bab4-8c6db2c8c963-kube-api-access-vmr56\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.824794 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d61d7a2b-f731-4982-bab4-8c6db2c8c963-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.824804 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d61d7a2b-f731-4982-bab4-8c6db2c8c963-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.846678 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.852773 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8bce638-dac9-429f-96fc-026923ee4c8e-config\") pod \"route-controller-manager-7b46b9c4c-8xthj\" (UID: \"a8bce638-dac9-429f-96fc-026923ee4c8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.963892 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.972140 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.973360 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24hfq\" 
(UniqueName: \"kubernetes.io/projected/a8bce638-dac9-429f-96fc-026923ee4c8e-kube-api-access-24hfq\") pod \"route-controller-manager-7b46b9c4c-8xthj\" (UID: \"a8bce638-dac9-429f-96fc-026923ee4c8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj" Feb 24 09:59:34 crc kubenswrapper[4755]: I0224 09:59:34.985196 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8bce638-dac9-429f-96fc-026923ee4c8e-serving-cert\") pod \"route-controller-manager-7b46b9c4c-8xthj\" (UID: \"a8bce638-dac9-429f-96fc-026923ee4c8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj" Feb 24 09:59:35 crc kubenswrapper[4755]: I0224 09:59:35.205804 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj" Feb 24 09:59:35 crc kubenswrapper[4755]: I0224 09:59:35.263587 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2gqk6" podUID="9fce8bb3-2fc9-496a-b0c0-873427d27571" containerName="registry-server" probeResult="failure" output=< Feb 24 09:59:35 crc kubenswrapper[4755]: timeout: failed to connect service ":50051" within 1s Feb 24 09:59:35 crc kubenswrapper[4755]: > Feb 24 09:59:35 crc kubenswrapper[4755]: I0224 09:59:35.278120 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gt9z" event={"ID":"2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1","Type":"ContainerStarted","Data":"ec36d0d4564973440b1cc7faf41dd06f37e4ea1f41a7ceff9c14832c24043f49"} Feb 24 09:59:35 crc kubenswrapper[4755]: I0224 09:59:35.286618 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x69lf" event={"ID":"d61d7a2b-f731-4982-bab4-8c6db2c8c963","Type":"ContainerDied","Data":"55024261f62a1e1ecd9980b56a1a21478076919a5afd52fcdc9a177ae8589c07"} Feb 24 
09:59:35 crc kubenswrapper[4755]: I0224 09:59:35.286775 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x69lf" Feb 24 09:59:35 crc kubenswrapper[4755]: I0224 09:59:35.287042 4755 scope.go:117] "RemoveContainer" containerID="cfed8176e9cd19acdb935f513bc3bbfd3deecb488c2c5e241e1e78515b6d4045" Feb 24 09:59:35 crc kubenswrapper[4755]: I0224 09:59:35.330112 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6gt9z" podStartSLOduration=3.31130695 podStartE2EDuration="43.33009331s" podCreationTimestamp="2026-02-24 09:58:52 +0000 UTC" firstStartedPulling="2026-02-24 09:58:53.317838657 +0000 UTC m=+237.774361200" lastFinishedPulling="2026-02-24 09:59:33.336625017 +0000 UTC m=+277.793147560" observedRunningTime="2026-02-24 09:59:35.30439148 +0000 UTC m=+279.760914023" watchObservedRunningTime="2026-02-24 09:59:35.33009331 +0000 UTC m=+279.786615853" Feb 24 09:59:35 crc kubenswrapper[4755]: I0224 09:59:35.331448 4755 scope.go:117] "RemoveContainer" containerID="e63839b45457eb18a9c327d85b96f64561e38c1a39d0ab8677b0f9349a84360e" Feb 24 09:59:35 crc kubenswrapper[4755]: I0224 09:59:35.333118 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x69lf"] Feb 24 09:59:35 crc kubenswrapper[4755]: I0224 09:59:35.335738 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x69lf"] Feb 24 09:59:35 crc kubenswrapper[4755]: I0224 09:59:35.358315 4755 scope.go:117] "RemoveContainer" containerID="3d7d68159aa6437c8a9ed9e53dbca395f43feb7a919b6d09bb422e1a4114c7a5" Feb 24 09:59:35 crc kubenswrapper[4755]: I0224 09:59:35.465885 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7l52t"] Feb 24 09:59:35 crc kubenswrapper[4755]: I0224 09:59:35.672202 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj"] Feb 24 09:59:36 crc kubenswrapper[4755]: I0224 09:59:36.299774 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7l52t" podUID="f634ec6f-7ee6-4789-bc6a-e860f86fe6fa" containerName="registry-server" containerID="cri-o://b257035a3d896755e4e98e4a6b39b7b88b15862195f51af189948207a37adc95" gracePeriod=2 Feb 24 09:59:36 crc kubenswrapper[4755]: I0224 09:59:36.300183 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj" event={"ID":"a8bce638-dac9-429f-96fc-026923ee4c8e","Type":"ContainerStarted","Data":"19375b6a1e66e1eaff8df34f0ea3d6d777e9e62713d87c5d5c9e8dfe392842c7"} Feb 24 09:59:36 crc kubenswrapper[4755]: I0224 09:59:36.300230 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj" event={"ID":"a8bce638-dac9-429f-96fc-026923ee4c8e","Type":"ContainerStarted","Data":"896baaeecd1f3b7f53efe5bbc847f1a58d8efe0155301a28782465be021a00ea"} Feb 24 09:59:36 crc kubenswrapper[4755]: I0224 09:59:36.300809 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj" Feb 24 09:59:36 crc kubenswrapper[4755]: I0224 09:59:36.324968 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d61d7a2b-f731-4982-bab4-8c6db2c8c963" path="/var/lib/kubelet/pods/d61d7a2b-f731-4982-bab4-8c6db2c8c963/volumes" Feb 24 09:59:36 crc kubenswrapper[4755]: I0224 09:59:36.325560 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7c94c8e-698e-4c14-ae46-16f03e666f27" path="/var/lib/kubelet/pods/d7c94c8e-698e-4c14-ae46-16f03e666f27/volumes" Feb 24 09:59:36 crc kubenswrapper[4755]: I0224 09:59:36.327502 4755 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj" podStartSLOduration=8.327487981 podStartE2EDuration="8.327487981s" podCreationTimestamp="2026-02-24 09:59:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:59:36.319703751 +0000 UTC m=+280.776226294" watchObservedRunningTime="2026-02-24 09:59:36.327487981 +0000 UTC m=+280.784010524" Feb 24 09:59:36 crc kubenswrapper[4755]: I0224 09:59:36.574538 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj" Feb 24 09:59:37 crc kubenswrapper[4755]: I0224 09:59:37.324177 4755 generic.go:334] "Generic (PLEG): container finished" podID="f634ec6f-7ee6-4789-bc6a-e860f86fe6fa" containerID="b257035a3d896755e4e98e4a6b39b7b88b15862195f51af189948207a37adc95" exitCode=0 Feb 24 09:59:37 crc kubenswrapper[4755]: I0224 09:59:37.324289 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7l52t" event={"ID":"f634ec6f-7ee6-4789-bc6a-e860f86fe6fa","Type":"ContainerDied","Data":"b257035a3d896755e4e98e4a6b39b7b88b15862195f51af189948207a37adc95"} Feb 24 09:59:37 crc kubenswrapper[4755]: I0224 09:59:37.454760 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7l52t" Feb 24 09:59:37 crc kubenswrapper[4755]: I0224 09:59:37.563275 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f634ec6f-7ee6-4789-bc6a-e860f86fe6fa-utilities\") pod \"f634ec6f-7ee6-4789-bc6a-e860f86fe6fa\" (UID: \"f634ec6f-7ee6-4789-bc6a-e860f86fe6fa\") " Feb 24 09:59:37 crc kubenswrapper[4755]: I0224 09:59:37.563376 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f634ec6f-7ee6-4789-bc6a-e860f86fe6fa-catalog-content\") pod \"f634ec6f-7ee6-4789-bc6a-e860f86fe6fa\" (UID: \"f634ec6f-7ee6-4789-bc6a-e860f86fe6fa\") " Feb 24 09:59:37 crc kubenswrapper[4755]: I0224 09:59:37.563574 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g269s\" (UniqueName: \"kubernetes.io/projected/f634ec6f-7ee6-4789-bc6a-e860f86fe6fa-kube-api-access-g269s\") pod \"f634ec6f-7ee6-4789-bc6a-e860f86fe6fa\" (UID: \"f634ec6f-7ee6-4789-bc6a-e860f86fe6fa\") " Feb 24 09:59:37 crc kubenswrapper[4755]: I0224 09:59:37.564342 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f634ec6f-7ee6-4789-bc6a-e860f86fe6fa-utilities" (OuterVolumeSpecName: "utilities") pod "f634ec6f-7ee6-4789-bc6a-e860f86fe6fa" (UID: "f634ec6f-7ee6-4789-bc6a-e860f86fe6fa"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:59:37 crc kubenswrapper[4755]: I0224 09:59:37.565352 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f634ec6f-7ee6-4789-bc6a-e860f86fe6fa-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:37 crc kubenswrapper[4755]: I0224 09:59:37.572295 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f634ec6f-7ee6-4789-bc6a-e860f86fe6fa-kube-api-access-g269s" (OuterVolumeSpecName: "kube-api-access-g269s") pod "f634ec6f-7ee6-4789-bc6a-e860f86fe6fa" (UID: "f634ec6f-7ee6-4789-bc6a-e860f86fe6fa"). InnerVolumeSpecName "kube-api-access-g269s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:59:37 crc kubenswrapper[4755]: I0224 09:59:37.666582 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g269s\" (UniqueName: \"kubernetes.io/projected/f634ec6f-7ee6-4789-bc6a-e860f86fe6fa-kube-api-access-g269s\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:37 crc kubenswrapper[4755]: I0224 09:59:37.689539 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f634ec6f-7ee6-4789-bc6a-e860f86fe6fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f634ec6f-7ee6-4789-bc6a-e860f86fe6fa" (UID: "f634ec6f-7ee6-4789-bc6a-e860f86fe6fa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:59:37 crc kubenswrapper[4755]: I0224 09:59:37.767579 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f634ec6f-7ee6-4789-bc6a-e860f86fe6fa-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:38 crc kubenswrapper[4755]: I0224 09:59:38.335146 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7l52t" event={"ID":"f634ec6f-7ee6-4789-bc6a-e860f86fe6fa","Type":"ContainerDied","Data":"41610d61058d9ce15290b6403f3559002df78d9cf09b8c8cc20dadc5c85ec5d3"} Feb 24 09:59:38 crc kubenswrapper[4755]: I0224 09:59:38.335264 4755 scope.go:117] "RemoveContainer" containerID="b257035a3d896755e4e98e4a6b39b7b88b15862195f51af189948207a37adc95" Feb 24 09:59:38 crc kubenswrapper[4755]: I0224 09:59:38.335272 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7l52t" Feb 24 09:59:38 crc kubenswrapper[4755]: I0224 09:59:38.362202 4755 scope.go:117] "RemoveContainer" containerID="8d5f3021dc7e21efba2be5ff4cd7c1eef04b360380c65c20628367a0b049f456" Feb 24 09:59:38 crc kubenswrapper[4755]: I0224 09:59:38.382216 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7l52t"] Feb 24 09:59:38 crc kubenswrapper[4755]: I0224 09:59:38.387603 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7l52t"] Feb 24 09:59:38 crc kubenswrapper[4755]: I0224 09:59:38.402822 4755 scope.go:117] "RemoveContainer" containerID="8ea0c820e9ba896723fa93e4127eaf06240aa536f0e5c0b8b1310c0f6a537b42" Feb 24 09:59:40 crc kubenswrapper[4755]: I0224 09:59:40.281719 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mqjwv" Feb 24 09:59:40 crc kubenswrapper[4755]: I0224 09:59:40.282713 4755 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mqjwv" Feb 24 09:59:40 crc kubenswrapper[4755]: I0224 09:59:40.325179 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f634ec6f-7ee6-4789-bc6a-e860f86fe6fa" path="/var/lib/kubelet/pods/f634ec6f-7ee6-4789-bc6a-e860f86fe6fa/volumes" Feb 24 09:59:40 crc kubenswrapper[4755]: I0224 09:59:40.361091 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mqjwv" Feb 24 09:59:41 crc kubenswrapper[4755]: I0224 09:59:41.424839 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mqjwv" Feb 24 09:59:42 crc kubenswrapper[4755]: I0224 09:59:42.478187 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6gt9z" Feb 24 09:59:42 crc kubenswrapper[4755]: I0224 09:59:42.478253 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6gt9z" Feb 24 09:59:42 crc kubenswrapper[4755]: I0224 09:59:42.528899 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6gt9z" Feb 24 09:59:43 crc kubenswrapper[4755]: I0224 09:59:43.023980 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pjxsm" Feb 24 09:59:43 crc kubenswrapper[4755]: I0224 09:59:43.066700 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pjxsm" Feb 24 09:59:43 crc kubenswrapper[4755]: I0224 09:59:43.441842 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6gt9z" Feb 24 09:59:43 crc kubenswrapper[4755]: I0224 09:59:43.560453 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-2gqk6" Feb 24 09:59:43 crc kubenswrapper[4755]: I0224 09:59:43.614436 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2gqk6" Feb 24 09:59:43 crc kubenswrapper[4755]: I0224 09:59:43.866143 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pjxsm"] Feb 24 09:59:44 crc kubenswrapper[4755]: I0224 09:59:44.375285 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pjxsm" podUID="f5cfb137-48b9-4d3d-ab93-7f688d450d2a" containerName="registry-server" containerID="cri-o://5cae7774d67427d10a21539b1c95ce10da150ba853dd1b2552ef7dda704a19f6" gracePeriod=2 Feb 24 09:59:44 crc kubenswrapper[4755]: I0224 09:59:44.853238 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pjxsm" Feb 24 09:59:44 crc kubenswrapper[4755]: I0224 09:59:44.896561 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5cfb137-48b9-4d3d-ab93-7f688d450d2a-catalog-content\") pod \"f5cfb137-48b9-4d3d-ab93-7f688d450d2a\" (UID: \"f5cfb137-48b9-4d3d-ab93-7f688d450d2a\") " Feb 24 09:59:44 crc kubenswrapper[4755]: I0224 09:59:44.897501 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6kkn\" (UniqueName: \"kubernetes.io/projected/f5cfb137-48b9-4d3d-ab93-7f688d450d2a-kube-api-access-t6kkn\") pod \"f5cfb137-48b9-4d3d-ab93-7f688d450d2a\" (UID: \"f5cfb137-48b9-4d3d-ab93-7f688d450d2a\") " Feb 24 09:59:44 crc kubenswrapper[4755]: I0224 09:59:44.897802 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5cfb137-48b9-4d3d-ab93-7f688d450d2a-utilities\") pod 
\"f5cfb137-48b9-4d3d-ab93-7f688d450d2a\" (UID: \"f5cfb137-48b9-4d3d-ab93-7f688d450d2a\") " Feb 24 09:59:44 crc kubenswrapper[4755]: I0224 09:59:44.899843 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5cfb137-48b9-4d3d-ab93-7f688d450d2a-utilities" (OuterVolumeSpecName: "utilities") pod "f5cfb137-48b9-4d3d-ab93-7f688d450d2a" (UID: "f5cfb137-48b9-4d3d-ab93-7f688d450d2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:59:44 crc kubenswrapper[4755]: I0224 09:59:44.910601 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5cfb137-48b9-4d3d-ab93-7f688d450d2a-kube-api-access-t6kkn" (OuterVolumeSpecName: "kube-api-access-t6kkn") pod "f5cfb137-48b9-4d3d-ab93-7f688d450d2a" (UID: "f5cfb137-48b9-4d3d-ab93-7f688d450d2a"). InnerVolumeSpecName "kube-api-access-t6kkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:59:44 crc kubenswrapper[4755]: I0224 09:59:44.933550 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5cfb137-48b9-4d3d-ab93-7f688d450d2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5cfb137-48b9-4d3d-ab93-7f688d450d2a" (UID: "f5cfb137-48b9-4d3d-ab93-7f688d450d2a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:59:45 crc kubenswrapper[4755]: I0224 09:59:45.000456 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6kkn\" (UniqueName: \"kubernetes.io/projected/f5cfb137-48b9-4d3d-ab93-7f688d450d2a-kube-api-access-t6kkn\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:45 crc kubenswrapper[4755]: I0224 09:59:45.000522 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5cfb137-48b9-4d3d-ab93-7f688d450d2a-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:45 crc kubenswrapper[4755]: I0224 09:59:45.000542 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5cfb137-48b9-4d3d-ab93-7f688d450d2a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:45 crc kubenswrapper[4755]: I0224 09:59:45.385346 4755 generic.go:334] "Generic (PLEG): container finished" podID="f5cfb137-48b9-4d3d-ab93-7f688d450d2a" containerID="5cae7774d67427d10a21539b1c95ce10da150ba853dd1b2552ef7dda704a19f6" exitCode=0 Feb 24 09:59:45 crc kubenswrapper[4755]: I0224 09:59:45.385536 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pjxsm" Feb 24 09:59:45 crc kubenswrapper[4755]: I0224 09:59:45.386322 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjxsm" event={"ID":"f5cfb137-48b9-4d3d-ab93-7f688d450d2a","Type":"ContainerDied","Data":"5cae7774d67427d10a21539b1c95ce10da150ba853dd1b2552ef7dda704a19f6"} Feb 24 09:59:45 crc kubenswrapper[4755]: I0224 09:59:45.386400 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjxsm" event={"ID":"f5cfb137-48b9-4d3d-ab93-7f688d450d2a","Type":"ContainerDied","Data":"063280ac5adc4dedfb7b645bf15e6bb12fc387bcc33392be5dc53332f3353b13"} Feb 24 09:59:45 crc kubenswrapper[4755]: I0224 09:59:45.386435 4755 scope.go:117] "RemoveContainer" containerID="5cae7774d67427d10a21539b1c95ce10da150ba853dd1b2552ef7dda704a19f6" Feb 24 09:59:45 crc kubenswrapper[4755]: I0224 09:59:45.409906 4755 scope.go:117] "RemoveContainer" containerID="d5db72f8c93b803d8cd840a0b065acf2c0c6a8863a4e8e985d35e03e47db2ba8" Feb 24 09:59:45 crc kubenswrapper[4755]: I0224 09:59:45.433950 4755 scope.go:117] "RemoveContainer" containerID="c3d2011ef31ca9d7ec4c0331a97664096a2ec12cb35a4ba415254ab5ff16b134" Feb 24 09:59:45 crc kubenswrapper[4755]: I0224 09:59:45.434131 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pjxsm"] Feb 24 09:59:45 crc kubenswrapper[4755]: I0224 09:59:45.437257 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pjxsm"] Feb 24 09:59:45 crc kubenswrapper[4755]: I0224 09:59:45.453483 4755 scope.go:117] "RemoveContainer" containerID="5cae7774d67427d10a21539b1c95ce10da150ba853dd1b2552ef7dda704a19f6" Feb 24 09:59:45 crc kubenswrapper[4755]: E0224 09:59:45.454139 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5cae7774d67427d10a21539b1c95ce10da150ba853dd1b2552ef7dda704a19f6\": container with ID starting with 5cae7774d67427d10a21539b1c95ce10da150ba853dd1b2552ef7dda704a19f6 not found: ID does not exist" containerID="5cae7774d67427d10a21539b1c95ce10da150ba853dd1b2552ef7dda704a19f6" Feb 24 09:59:45 crc kubenswrapper[4755]: I0224 09:59:45.454180 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cae7774d67427d10a21539b1c95ce10da150ba853dd1b2552ef7dda704a19f6"} err="failed to get container status \"5cae7774d67427d10a21539b1c95ce10da150ba853dd1b2552ef7dda704a19f6\": rpc error: code = NotFound desc = could not find container \"5cae7774d67427d10a21539b1c95ce10da150ba853dd1b2552ef7dda704a19f6\": container with ID starting with 5cae7774d67427d10a21539b1c95ce10da150ba853dd1b2552ef7dda704a19f6 not found: ID does not exist" Feb 24 09:59:45 crc kubenswrapper[4755]: I0224 09:59:45.454206 4755 scope.go:117] "RemoveContainer" containerID="d5db72f8c93b803d8cd840a0b065acf2c0c6a8863a4e8e985d35e03e47db2ba8" Feb 24 09:59:45 crc kubenswrapper[4755]: E0224 09:59:45.454591 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5db72f8c93b803d8cd840a0b065acf2c0c6a8863a4e8e985d35e03e47db2ba8\": container with ID starting with d5db72f8c93b803d8cd840a0b065acf2c0c6a8863a4e8e985d35e03e47db2ba8 not found: ID does not exist" containerID="d5db72f8c93b803d8cd840a0b065acf2c0c6a8863a4e8e985d35e03e47db2ba8" Feb 24 09:59:45 crc kubenswrapper[4755]: I0224 09:59:45.454628 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5db72f8c93b803d8cd840a0b065acf2c0c6a8863a4e8e985d35e03e47db2ba8"} err="failed to get container status \"d5db72f8c93b803d8cd840a0b065acf2c0c6a8863a4e8e985d35e03e47db2ba8\": rpc error: code = NotFound desc = could not find container \"d5db72f8c93b803d8cd840a0b065acf2c0c6a8863a4e8e985d35e03e47db2ba8\": container with ID 
starting with d5db72f8c93b803d8cd840a0b065acf2c0c6a8863a4e8e985d35e03e47db2ba8 not found: ID does not exist" Feb 24 09:59:45 crc kubenswrapper[4755]: I0224 09:59:45.454654 4755 scope.go:117] "RemoveContainer" containerID="c3d2011ef31ca9d7ec4c0331a97664096a2ec12cb35a4ba415254ab5ff16b134" Feb 24 09:59:45 crc kubenswrapper[4755]: E0224 09:59:45.454966 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3d2011ef31ca9d7ec4c0331a97664096a2ec12cb35a4ba415254ab5ff16b134\": container with ID starting with c3d2011ef31ca9d7ec4c0331a97664096a2ec12cb35a4ba415254ab5ff16b134 not found: ID does not exist" containerID="c3d2011ef31ca9d7ec4c0331a97664096a2ec12cb35a4ba415254ab5ff16b134" Feb 24 09:59:45 crc kubenswrapper[4755]: I0224 09:59:45.455103 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3d2011ef31ca9d7ec4c0331a97664096a2ec12cb35a4ba415254ab5ff16b134"} err="failed to get container status \"c3d2011ef31ca9d7ec4c0331a97664096a2ec12cb35a4ba415254ab5ff16b134\": rpc error: code = NotFound desc = could not find container \"c3d2011ef31ca9d7ec4c0331a97664096a2ec12cb35a4ba415254ab5ff16b134\": container with ID starting with c3d2011ef31ca9d7ec4c0331a97664096a2ec12cb35a4ba415254ab5ff16b134 not found: ID does not exist" Feb 24 09:59:46 crc kubenswrapper[4755]: I0224 09:59:46.329393 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5cfb137-48b9-4d3d-ab93-7f688d450d2a" path="/var/lib/kubelet/pods/f5cfb137-48b9-4d3d-ab93-7f688d450d2a/volumes" Feb 24 09:59:46 crc kubenswrapper[4755]: I0224 09:59:46.414928 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44264: no serving certificate available for the kubelet" Feb 24 09:59:48 crc kubenswrapper[4755]: I0224 09:59:48.810229 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn"] Feb 24 09:59:48 crc 
kubenswrapper[4755]: I0224 09:59:48.810778 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" podUID="4d335c45-4b85-41ff-a15e-6666b28ab269" containerName="controller-manager" containerID="cri-o://c54ec17530b4e4d038c8b62e61581101d256c67d6ecbc8e94ee0221b5fe8e331" gracePeriod=30 Feb 24 09:59:48 crc kubenswrapper[4755]: I0224 09:59:48.825205 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj"] Feb 24 09:59:48 crc kubenswrapper[4755]: I0224 09:59:48.825446 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj" podUID="a8bce638-dac9-429f-96fc-026923ee4c8e" containerName="route-controller-manager" containerID="cri-o://19375b6a1e66e1eaff8df34f0ea3d6d777e9e62713d87c5d5c9e8dfe392842c7" gracePeriod=30 Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.316908 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.386360 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8bce638-dac9-429f-96fc-026923ee4c8e-client-ca\") pod \"a8bce638-dac9-429f-96fc-026923ee4c8e\" (UID: \"a8bce638-dac9-429f-96fc-026923ee4c8e\") " Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.386820 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8bce638-dac9-429f-96fc-026923ee4c8e-config\") pod \"a8bce638-dac9-429f-96fc-026923ee4c8e\" (UID: \"a8bce638-dac9-429f-96fc-026923ee4c8e\") " Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.387000 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24hfq\" (UniqueName: \"kubernetes.io/projected/a8bce638-dac9-429f-96fc-026923ee4c8e-kube-api-access-24hfq\") pod \"a8bce638-dac9-429f-96fc-026923ee4c8e\" (UID: \"a8bce638-dac9-429f-96fc-026923ee4c8e\") " Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.387227 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8bce638-dac9-429f-96fc-026923ee4c8e-serving-cert\") pod \"a8bce638-dac9-429f-96fc-026923ee4c8e\" (UID: \"a8bce638-dac9-429f-96fc-026923ee4c8e\") " Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.387333 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8bce638-dac9-429f-96fc-026923ee4c8e-client-ca" (OuterVolumeSpecName: "client-ca") pod "a8bce638-dac9-429f-96fc-026923ee4c8e" (UID: "a8bce638-dac9-429f-96fc-026923ee4c8e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.387421 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8bce638-dac9-429f-96fc-026923ee4c8e-config" (OuterVolumeSpecName: "config") pod "a8bce638-dac9-429f-96fc-026923ee4c8e" (UID: "a8bce638-dac9-429f-96fc-026923ee4c8e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.387971 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8bce638-dac9-429f-96fc-026923ee4c8e-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.388145 4755 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8bce638-dac9-429f-96fc-026923ee4c8e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.391973 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8bce638-dac9-429f-96fc-026923ee4c8e-kube-api-access-24hfq" (OuterVolumeSpecName: "kube-api-access-24hfq") pod "a8bce638-dac9-429f-96fc-026923ee4c8e" (UID: "a8bce638-dac9-429f-96fc-026923ee4c8e"). InnerVolumeSpecName "kube-api-access-24hfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.392373 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8bce638-dac9-429f-96fc-026923ee4c8e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a8bce638-dac9-429f-96fc-026923ee4c8e" (UID: "a8bce638-dac9-429f-96fc-026923ee4c8e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.395993 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.420287 4755 generic.go:334] "Generic (PLEG): container finished" podID="a8bce638-dac9-429f-96fc-026923ee4c8e" containerID="19375b6a1e66e1eaff8df34f0ea3d6d777e9e62713d87c5d5c9e8dfe392842c7" exitCode=0 Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.420349 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.420355 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj" event={"ID":"a8bce638-dac9-429f-96fc-026923ee4c8e","Type":"ContainerDied","Data":"19375b6a1e66e1eaff8df34f0ea3d6d777e9e62713d87c5d5c9e8dfe392842c7"} Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.420496 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj" event={"ID":"a8bce638-dac9-429f-96fc-026923ee4c8e","Type":"ContainerDied","Data":"896baaeecd1f3b7f53efe5bbc847f1a58d8efe0155301a28782465be021a00ea"} Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.420538 4755 scope.go:117] "RemoveContainer" containerID="19375b6a1e66e1eaff8df34f0ea3d6d777e9e62713d87c5d5c9e8dfe392842c7" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.422942 4755 generic.go:334] "Generic (PLEG): container finished" podID="4d335c45-4b85-41ff-a15e-6666b28ab269" containerID="c54ec17530b4e4d038c8b62e61581101d256c67d6ecbc8e94ee0221b5fe8e331" exitCode=0 Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.422983 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" 
event={"ID":"4d335c45-4b85-41ff-a15e-6666b28ab269","Type":"ContainerDied","Data":"c54ec17530b4e4d038c8b62e61581101d256c67d6ecbc8e94ee0221b5fe8e331"} Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.423007 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" event={"ID":"4d335c45-4b85-41ff-a15e-6666b28ab269","Type":"ContainerDied","Data":"3c2808e5a6a2b8876d4a53832d15b1dc0bf8608000628c0edc839b425cd97506"} Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.423015 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.443415 4755 scope.go:117] "RemoveContainer" containerID="19375b6a1e66e1eaff8df34f0ea3d6d777e9e62713d87c5d5c9e8dfe392842c7" Feb 24 09:59:49 crc kubenswrapper[4755]: E0224 09:59:49.443968 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19375b6a1e66e1eaff8df34f0ea3d6d777e9e62713d87c5d5c9e8dfe392842c7\": container with ID starting with 19375b6a1e66e1eaff8df34f0ea3d6d777e9e62713d87c5d5c9e8dfe392842c7 not found: ID does not exist" containerID="19375b6a1e66e1eaff8df34f0ea3d6d777e9e62713d87c5d5c9e8dfe392842c7" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.443997 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19375b6a1e66e1eaff8df34f0ea3d6d777e9e62713d87c5d5c9e8dfe392842c7"} err="failed to get container status \"19375b6a1e66e1eaff8df34f0ea3d6d777e9e62713d87c5d5c9e8dfe392842c7\": rpc error: code = NotFound desc = could not find container \"19375b6a1e66e1eaff8df34f0ea3d6d777e9e62713d87c5d5c9e8dfe392842c7\": container with ID starting with 19375b6a1e66e1eaff8df34f0ea3d6d777e9e62713d87c5d5c9e8dfe392842c7 not found: ID does not exist" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 
09:59:49.444022 4755 scope.go:117] "RemoveContainer" containerID="c54ec17530b4e4d038c8b62e61581101d256c67d6ecbc8e94ee0221b5fe8e331" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.454084 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj"] Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.456974 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b46b9c4c-8xthj"] Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.467044 4755 scope.go:117] "RemoveContainer" containerID="c54ec17530b4e4d038c8b62e61581101d256c67d6ecbc8e94ee0221b5fe8e331" Feb 24 09:59:49 crc kubenswrapper[4755]: E0224 09:59:49.467857 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c54ec17530b4e4d038c8b62e61581101d256c67d6ecbc8e94ee0221b5fe8e331\": container with ID starting with c54ec17530b4e4d038c8b62e61581101d256c67d6ecbc8e94ee0221b5fe8e331 not found: ID does not exist" containerID="c54ec17530b4e4d038c8b62e61581101d256c67d6ecbc8e94ee0221b5fe8e331" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.467917 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c54ec17530b4e4d038c8b62e61581101d256c67d6ecbc8e94ee0221b5fe8e331"} err="failed to get container status \"c54ec17530b4e4d038c8b62e61581101d256c67d6ecbc8e94ee0221b5fe8e331\": rpc error: code = NotFound desc = could not find container \"c54ec17530b4e4d038c8b62e61581101d256c67d6ecbc8e94ee0221b5fe8e331\": container with ID starting with c54ec17530b4e4d038c8b62e61581101d256c67d6ecbc8e94ee0221b5fe8e331 not found: ID does not exist" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.489325 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4d335c45-4b85-41ff-a15e-6666b28ab269-config\") pod \"4d335c45-4b85-41ff-a15e-6666b28ab269\" (UID: \"4d335c45-4b85-41ff-a15e-6666b28ab269\") " Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.489366 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d335c45-4b85-41ff-a15e-6666b28ab269-proxy-ca-bundles\") pod \"4d335c45-4b85-41ff-a15e-6666b28ab269\" (UID: \"4d335c45-4b85-41ff-a15e-6666b28ab269\") " Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.489416 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4p5c\" (UniqueName: \"kubernetes.io/projected/4d335c45-4b85-41ff-a15e-6666b28ab269-kube-api-access-j4p5c\") pod \"4d335c45-4b85-41ff-a15e-6666b28ab269\" (UID: \"4d335c45-4b85-41ff-a15e-6666b28ab269\") " Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.489446 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d335c45-4b85-41ff-a15e-6666b28ab269-serving-cert\") pod \"4d335c45-4b85-41ff-a15e-6666b28ab269\" (UID: \"4d335c45-4b85-41ff-a15e-6666b28ab269\") " Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.489463 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d335c45-4b85-41ff-a15e-6666b28ab269-client-ca\") pod \"4d335c45-4b85-41ff-a15e-6666b28ab269\" (UID: \"4d335c45-4b85-41ff-a15e-6666b28ab269\") " Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.489716 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24hfq\" (UniqueName: \"kubernetes.io/projected/a8bce638-dac9-429f-96fc-026923ee4c8e-kube-api-access-24hfq\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.489733 4755 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8bce638-dac9-429f-96fc-026923ee4c8e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.490583 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d335c45-4b85-41ff-a15e-6666b28ab269-client-ca" (OuterVolumeSpecName: "client-ca") pod "4d335c45-4b85-41ff-a15e-6666b28ab269" (UID: "4d335c45-4b85-41ff-a15e-6666b28ab269"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.490716 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d335c45-4b85-41ff-a15e-6666b28ab269-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4d335c45-4b85-41ff-a15e-6666b28ab269" (UID: "4d335c45-4b85-41ff-a15e-6666b28ab269"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.490985 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d335c45-4b85-41ff-a15e-6666b28ab269-config" (OuterVolumeSpecName: "config") pod "4d335c45-4b85-41ff-a15e-6666b28ab269" (UID: "4d335c45-4b85-41ff-a15e-6666b28ab269"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.493176 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d335c45-4b85-41ff-a15e-6666b28ab269-kube-api-access-j4p5c" (OuterVolumeSpecName: "kube-api-access-j4p5c") pod "4d335c45-4b85-41ff-a15e-6666b28ab269" (UID: "4d335c45-4b85-41ff-a15e-6666b28ab269"). InnerVolumeSpecName "kube-api-access-j4p5c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.494661 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d335c45-4b85-41ff-a15e-6666b28ab269-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4d335c45-4b85-41ff-a15e-6666b28ab269" (UID: "4d335c45-4b85-41ff-a15e-6666b28ab269"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.590953 4755 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d335c45-4b85-41ff-a15e-6666b28ab269-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.591608 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4p5c\" (UniqueName: \"kubernetes.io/projected/4d335c45-4b85-41ff-a15e-6666b28ab269-kube-api-access-j4p5c\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.591731 4755 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d335c45-4b85-41ff-a15e-6666b28ab269-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.591850 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d335c45-4b85-41ff-a15e-6666b28ab269-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.591953 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d335c45-4b85-41ff-a15e-6666b28ab269-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.763735 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn"] Feb 24 09:59:49 crc kubenswrapper[4755]: I0224 09:59:49.787970 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c5f5c55f4-czxpn"] Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.329101 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d335c45-4b85-41ff-a15e-6666b28ab269" path="/var/lib/kubelet/pods/4d335c45-4b85-41ff-a15e-6666b28ab269/volumes" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.330160 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8bce638-dac9-429f-96fc-026923ee4c8e" path="/var/lib/kubelet/pods/a8bce638-dac9-429f-96fc-026923ee4c8e/volumes" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.484947 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-54695856b6-t7rr8"] Feb 24 09:59:50 crc kubenswrapper[4755]: E0224 09:59:50.485374 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f634ec6f-7ee6-4789-bc6a-e860f86fe6fa" containerName="extract-content" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.485406 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f634ec6f-7ee6-4789-bc6a-e860f86fe6fa" containerName="extract-content" Feb 24 09:59:50 crc kubenswrapper[4755]: E0224 09:59:50.485426 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5cfb137-48b9-4d3d-ab93-7f688d450d2a" containerName="extract-utilities" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.485444 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5cfb137-48b9-4d3d-ab93-7f688d450d2a" containerName="extract-utilities" Feb 24 09:59:50 crc kubenswrapper[4755]: E0224 09:59:50.485470 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d61d7a2b-f731-4982-bab4-8c6db2c8c963" containerName="extract-utilities" Feb 24 09:59:50 crc kubenswrapper[4755]: 
I0224 09:59:50.485486 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d61d7a2b-f731-4982-bab4-8c6db2c8c963" containerName="extract-utilities" Feb 24 09:59:50 crc kubenswrapper[4755]: E0224 09:59:50.485509 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5cfb137-48b9-4d3d-ab93-7f688d450d2a" containerName="registry-server" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.485524 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5cfb137-48b9-4d3d-ab93-7f688d450d2a" containerName="registry-server" Feb 24 09:59:50 crc kubenswrapper[4755]: E0224 09:59:50.485543 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8bce638-dac9-429f-96fc-026923ee4c8e" containerName="route-controller-manager" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.485559 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8bce638-dac9-429f-96fc-026923ee4c8e" containerName="route-controller-manager" Feb 24 09:59:50 crc kubenswrapper[4755]: E0224 09:59:50.485593 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d61d7a2b-f731-4982-bab4-8c6db2c8c963" containerName="extract-content" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.485610 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d61d7a2b-f731-4982-bab4-8c6db2c8c963" containerName="extract-content" Feb 24 09:59:50 crc kubenswrapper[4755]: E0224 09:59:50.485638 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f634ec6f-7ee6-4789-bc6a-e860f86fe6fa" containerName="extract-utilities" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.485654 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f634ec6f-7ee6-4789-bc6a-e860f86fe6fa" containerName="extract-utilities" Feb 24 09:59:50 crc kubenswrapper[4755]: E0224 09:59:50.485677 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5cfb137-48b9-4d3d-ab93-7f688d450d2a" containerName="extract-content" Feb 24 09:59:50 crc 
kubenswrapper[4755]: I0224 09:59:50.485695 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5cfb137-48b9-4d3d-ab93-7f688d450d2a" containerName="extract-content" Feb 24 09:59:50 crc kubenswrapper[4755]: E0224 09:59:50.485724 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d61d7a2b-f731-4982-bab4-8c6db2c8c963" containerName="registry-server" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.485741 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d61d7a2b-f731-4982-bab4-8c6db2c8c963" containerName="registry-server" Feb 24 09:59:50 crc kubenswrapper[4755]: E0224 09:59:50.485762 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d335c45-4b85-41ff-a15e-6666b28ab269" containerName="controller-manager" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.485779 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d335c45-4b85-41ff-a15e-6666b28ab269" containerName="controller-manager" Feb 24 09:59:50 crc kubenswrapper[4755]: E0224 09:59:50.485804 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f634ec6f-7ee6-4789-bc6a-e860f86fe6fa" containerName="registry-server" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.485821 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f634ec6f-7ee6-4789-bc6a-e860f86fe6fa" containerName="registry-server" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.486040 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="d61d7a2b-f731-4982-bab4-8c6db2c8c963" containerName="registry-server" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.486111 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8bce638-dac9-429f-96fc-026923ee4c8e" containerName="route-controller-manager" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.486145 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f634ec6f-7ee6-4789-bc6a-e860f86fe6fa" containerName="registry-server" Feb 24 
09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.486170 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5cfb137-48b9-4d3d-ab93-7f688d450d2a" containerName="registry-server" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.486194 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d335c45-4b85-41ff-a15e-6666b28ab269" containerName="controller-manager" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.486926 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.490633 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.491452 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc"] Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.491933 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.492508 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.492602 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.496393 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.496700 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.497389 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.497862 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.497961 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.500239 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.506024 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.506202 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.506610 4755 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"client-ca" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.520306 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.522618 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54695856b6-t7rr8"] Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.527703 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc"] Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.606479 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e13af10f-6473-4b6f-b1f9-d5d8943426b1-serving-cert\") pod \"route-controller-manager-886c7c4c6-jfblc\" (UID: \"e13af10f-6473-4b6f-b1f9-d5d8943426b1\") " pod="openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.606837 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m86gd\" (UniqueName: \"kubernetes.io/projected/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-kube-api-access-m86gd\") pod \"controller-manager-54695856b6-t7rr8\" (UID: \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\") " pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.607159 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e13af10f-6473-4b6f-b1f9-d5d8943426b1-config\") pod \"route-controller-manager-886c7c4c6-jfblc\" (UID: \"e13af10f-6473-4b6f-b1f9-d5d8943426b1\") " pod="openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc" Feb 24 09:59:50 crc 
kubenswrapper[4755]: I0224 09:59:50.607300 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-serving-cert\") pod \"controller-manager-54695856b6-t7rr8\" (UID: \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\") " pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.607405 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92gnh\" (UniqueName: \"kubernetes.io/projected/e13af10f-6473-4b6f-b1f9-d5d8943426b1-kube-api-access-92gnh\") pod \"route-controller-manager-886c7c4c6-jfblc\" (UID: \"e13af10f-6473-4b6f-b1f9-d5d8943426b1\") " pod="openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.607525 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-proxy-ca-bundles\") pod \"controller-manager-54695856b6-t7rr8\" (UID: \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\") " pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.607655 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e13af10f-6473-4b6f-b1f9-d5d8943426b1-client-ca\") pod \"route-controller-manager-886c7c4c6-jfblc\" (UID: \"e13af10f-6473-4b6f-b1f9-d5d8943426b1\") " pod="openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.607844 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-config\") pod \"controller-manager-54695856b6-t7rr8\" (UID: \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\") " pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.608091 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-client-ca\") pod \"controller-manager-54695856b6-t7rr8\" (UID: \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\") " pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.709679 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e13af10f-6473-4b6f-b1f9-d5d8943426b1-config\") pod \"route-controller-manager-886c7c4c6-jfblc\" (UID: \"e13af10f-6473-4b6f-b1f9-d5d8943426b1\") " pod="openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.709788 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-serving-cert\") pod \"controller-manager-54695856b6-t7rr8\" (UID: \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\") " pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.709835 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92gnh\" (UniqueName: \"kubernetes.io/projected/e13af10f-6473-4b6f-b1f9-d5d8943426b1-kube-api-access-92gnh\") pod \"route-controller-manager-886c7c4c6-jfblc\" (UID: \"e13af10f-6473-4b6f-b1f9-d5d8943426b1\") " pod="openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc" Feb 24 09:59:50 crc kubenswrapper[4755]: 
I0224 09:59:50.709898 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-proxy-ca-bundles\") pod \"controller-manager-54695856b6-t7rr8\" (UID: \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\") " pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.709964 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e13af10f-6473-4b6f-b1f9-d5d8943426b1-client-ca\") pod \"route-controller-manager-886c7c4c6-jfblc\" (UID: \"e13af10f-6473-4b6f-b1f9-d5d8943426b1\") " pod="openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.710013 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-config\") pod \"controller-manager-54695856b6-t7rr8\" (UID: \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\") " pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.710047 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-client-ca\") pod \"controller-manager-54695856b6-t7rr8\" (UID: \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\") " pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.710153 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e13af10f-6473-4b6f-b1f9-d5d8943426b1-serving-cert\") pod \"route-controller-manager-886c7c4c6-jfblc\" (UID: \"e13af10f-6473-4b6f-b1f9-d5d8943426b1\") " 
pod="openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.710261 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m86gd\" (UniqueName: \"kubernetes.io/projected/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-kube-api-access-m86gd\") pod \"controller-manager-54695856b6-t7rr8\" (UID: \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\") " pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.711949 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e13af10f-6473-4b6f-b1f9-d5d8943426b1-config\") pod \"route-controller-manager-886c7c4c6-jfblc\" (UID: \"e13af10f-6473-4b6f-b1f9-d5d8943426b1\") " pod="openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.712135 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-client-ca\") pod \"controller-manager-54695856b6-t7rr8\" (UID: \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\") " pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.712565 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-proxy-ca-bundles\") pod \"controller-manager-54695856b6-t7rr8\" (UID: \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\") " pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.713365 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e13af10f-6473-4b6f-b1f9-d5d8943426b1-client-ca\") pod 
\"route-controller-manager-886c7c4c6-jfblc\" (UID: \"e13af10f-6473-4b6f-b1f9-d5d8943426b1\") " pod="openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.713412 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-config\") pod \"controller-manager-54695856b6-t7rr8\" (UID: \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\") " pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.719411 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-serving-cert\") pod \"controller-manager-54695856b6-t7rr8\" (UID: \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\") " pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.719493 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e13af10f-6473-4b6f-b1f9-d5d8943426b1-serving-cert\") pod \"route-controller-manager-886c7c4c6-jfblc\" (UID: \"e13af10f-6473-4b6f-b1f9-d5d8943426b1\") " pod="openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.732974 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m86gd\" (UniqueName: \"kubernetes.io/projected/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-kube-api-access-m86gd\") pod \"controller-manager-54695856b6-t7rr8\" (UID: \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\") " pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.744112 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92gnh\" 
(UniqueName: \"kubernetes.io/projected/e13af10f-6473-4b6f-b1f9-d5d8943426b1-kube-api-access-92gnh\") pod \"route-controller-manager-886c7c4c6-jfblc\" (UID: \"e13af10f-6473-4b6f-b1f9-d5d8943426b1\") " pod="openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.827387 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" Feb 24 09:59:50 crc kubenswrapper[4755]: I0224 09:59:50.848472 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc" Feb 24 09:59:51 crc kubenswrapper[4755]: I0224 09:59:51.150918 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54695856b6-t7rr8"] Feb 24 09:59:51 crc kubenswrapper[4755]: I0224 09:59:51.210704 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc"] Feb 24 09:59:51 crc kubenswrapper[4755]: W0224 09:59:51.219888 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode13af10f_6473_4b6f_b1f9_d5d8943426b1.slice/crio-43cf6fe18ee31cd9b478c2c3e76e2c2258e6b1782aca7a51382532ff16fe7379 WatchSource:0}: Error finding container 43cf6fe18ee31cd9b478c2c3e76e2c2258e6b1782aca7a51382532ff16fe7379: Status 404 returned error can't find the container with id 43cf6fe18ee31cd9b478c2c3e76e2c2258e6b1782aca7a51382532ff16fe7379 Feb 24 09:59:51 crc kubenswrapper[4755]: I0224 09:59:51.441428 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc" event={"ID":"e13af10f-6473-4b6f-b1f9-d5d8943426b1","Type":"ContainerStarted","Data":"56055e99c48806b86a5f2a1e3c0e1dc532a5b7eac722e079e7c60fd794943bb8"} Feb 
24 09:59:51 crc kubenswrapper[4755]: I0224 09:59:51.441860 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc" event={"ID":"e13af10f-6473-4b6f-b1f9-d5d8943426b1","Type":"ContainerStarted","Data":"43cf6fe18ee31cd9b478c2c3e76e2c2258e6b1782aca7a51382532ff16fe7379"} Feb 24 09:59:51 crc kubenswrapper[4755]: I0224 09:59:51.445015 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc" Feb 24 09:59:51 crc kubenswrapper[4755]: I0224 09:59:51.447903 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" event={"ID":"c093fef3-5dad-496a-b7ee-4d25fd8e36ba","Type":"ContainerStarted","Data":"1d3ddb56f7f75afb87b6cf698b851709e2e8f2d75b4992bcdc9c07a41ca9e4f4"} Feb 24 09:59:51 crc kubenswrapper[4755]: I0224 09:59:51.447985 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" event={"ID":"c093fef3-5dad-496a-b7ee-4d25fd8e36ba","Type":"ContainerStarted","Data":"b8eda5abd18b0e7f89256d73dc447d41d1444f6ac2d70d9407c3f60a42a2bd94"} Feb 24 09:59:51 crc kubenswrapper[4755]: I0224 09:59:51.448261 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" Feb 24 09:59:51 crc kubenswrapper[4755]: I0224 09:59:51.449759 4755 patch_prober.go:28] interesting pod/controller-manager-54695856b6-t7rr8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Feb 24 09:59:51 crc kubenswrapper[4755]: I0224 09:59:51.449834 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" 
podUID="c093fef3-5dad-496a-b7ee-4d25fd8e36ba" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Feb 24 09:59:51 crc kubenswrapper[4755]: I0224 09:59:51.480837 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc" podStartSLOduration=3.480804728 podStartE2EDuration="3.480804728s" podCreationTimestamp="2026-02-24 09:59:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:59:51.478197431 +0000 UTC m=+295.934720044" watchObservedRunningTime="2026-02-24 09:59:51.480804728 +0000 UTC m=+295.937327311" Feb 24 09:59:51 crc kubenswrapper[4755]: I0224 09:59:51.512312 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" podStartSLOduration=3.5122887289999998 podStartE2EDuration="3.512288729s" podCreationTimestamp="2026-02-24 09:59:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:59:51.507678903 +0000 UTC m=+295.964201526" watchObservedRunningTime="2026-02-24 09:59:51.512288729 +0000 UTC m=+295.968811302" Feb 24 09:59:51 crc kubenswrapper[4755]: I0224 09:59:51.628630 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-srwxs"] Feb 24 09:59:51 crc kubenswrapper[4755]: I0224 09:59:51.695123 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:59:51 crc kubenswrapper[4755]: I0224 
09:59:51.695181 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:59:51 crc kubenswrapper[4755]: I0224 09:59:51.695227 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" Feb 24 09:59:51 crc kubenswrapper[4755]: I0224 09:59:51.695837 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2161eb648464d24c62a458b7a658d549183c8fed8a6904d37a1bffc7930d992e"} pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 09:59:51 crc kubenswrapper[4755]: I0224 09:59:51.695891 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" containerID="cri-o://2161eb648464d24c62a458b7a658d549183c8fed8a6904d37a1bffc7930d992e" gracePeriod=600 Feb 24 09:59:51 crc kubenswrapper[4755]: I0224 09:59:51.777043 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc" Feb 24 09:59:52 crc kubenswrapper[4755]: I0224 09:59:52.459337 4755 generic.go:334] "Generic (PLEG): container finished" podID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerID="2161eb648464d24c62a458b7a658d549183c8fed8a6904d37a1bffc7930d992e" exitCode=0 Feb 24 09:59:52 crc kubenswrapper[4755]: I0224 09:59:52.459440 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerDied","Data":"2161eb648464d24c62a458b7a658d549183c8fed8a6904d37a1bffc7930d992e"} Feb 24 09:59:52 crc kubenswrapper[4755]: I0224 09:59:52.460081 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerStarted","Data":"6e30cd3f07dec468034b68f42462aa6b03bd99da31c7eea2aa712e6bd5b08ae2"} Feb 24 09:59:52 crc kubenswrapper[4755]: I0224 09:59:52.466397 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" Feb 24 10:00:00 crc kubenswrapper[4755]: I0224 10:00:00.145842 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532120-98jrb"] Feb 24 10:00:00 crc kubenswrapper[4755]: I0224 10:00:00.147264 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-98jrb" Feb 24 10:00:00 crc kubenswrapper[4755]: I0224 10:00:00.149838 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 10:00:00 crc kubenswrapper[4755]: I0224 10:00:00.150023 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 24 10:00:00 crc kubenswrapper[4755]: I0224 10:00:00.166604 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532120-98jrb"] Feb 24 10:00:00 crc kubenswrapper[4755]: I0224 10:00:00.276976 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b86f7829-7ff9-4702-89d8-081b0997310e-secret-volume\") pod \"collect-profiles-29532120-98jrb\" (UID: \"b86f7829-7ff9-4702-89d8-081b0997310e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-98jrb" Feb 24 10:00:00 crc kubenswrapper[4755]: I0224 10:00:00.277094 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b86f7829-7ff9-4702-89d8-081b0997310e-config-volume\") pod \"collect-profiles-29532120-98jrb\" (UID: \"b86f7829-7ff9-4702-89d8-081b0997310e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-98jrb" Feb 24 10:00:00 crc kubenswrapper[4755]: I0224 10:00:00.277127 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-662hf\" (UniqueName: \"kubernetes.io/projected/b86f7829-7ff9-4702-89d8-081b0997310e-kube-api-access-662hf\") pod \"collect-profiles-29532120-98jrb\" (UID: \"b86f7829-7ff9-4702-89d8-081b0997310e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-98jrb" Feb 24 10:00:00 crc kubenswrapper[4755]: I0224 10:00:00.378940 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b86f7829-7ff9-4702-89d8-081b0997310e-secret-volume\") pod \"collect-profiles-29532120-98jrb\" (UID: \"b86f7829-7ff9-4702-89d8-081b0997310e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-98jrb" Feb 24 10:00:00 crc kubenswrapper[4755]: I0224 10:00:00.379004 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b86f7829-7ff9-4702-89d8-081b0997310e-config-volume\") pod \"collect-profiles-29532120-98jrb\" (UID: \"b86f7829-7ff9-4702-89d8-081b0997310e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-98jrb" Feb 24 10:00:00 crc kubenswrapper[4755]: I0224 10:00:00.379024 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-662hf\" (UniqueName: \"kubernetes.io/projected/b86f7829-7ff9-4702-89d8-081b0997310e-kube-api-access-662hf\") pod \"collect-profiles-29532120-98jrb\" (UID: \"b86f7829-7ff9-4702-89d8-081b0997310e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-98jrb" Feb 24 10:00:00 crc kubenswrapper[4755]: I0224 10:00:00.380291 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b86f7829-7ff9-4702-89d8-081b0997310e-config-volume\") pod \"collect-profiles-29532120-98jrb\" (UID: \"b86f7829-7ff9-4702-89d8-081b0997310e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-98jrb" Feb 24 10:00:00 crc kubenswrapper[4755]: I0224 10:00:00.386296 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/b86f7829-7ff9-4702-89d8-081b0997310e-secret-volume\") pod \"collect-profiles-29532120-98jrb\" (UID: \"b86f7829-7ff9-4702-89d8-081b0997310e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-98jrb" Feb 24 10:00:00 crc kubenswrapper[4755]: I0224 10:00:00.399231 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-662hf\" (UniqueName: \"kubernetes.io/projected/b86f7829-7ff9-4702-89d8-081b0997310e-kube-api-access-662hf\") pod \"collect-profiles-29532120-98jrb\" (UID: \"b86f7829-7ff9-4702-89d8-081b0997310e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-98jrb" Feb 24 10:00:00 crc kubenswrapper[4755]: I0224 10:00:00.467161 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-98jrb" Feb 24 10:00:00 crc kubenswrapper[4755]: I0224 10:00:00.899230 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532120-98jrb"] Feb 24 10:00:01 crc kubenswrapper[4755]: I0224 10:00:01.534133 4755 generic.go:334] "Generic (PLEG): container finished" podID="b86f7829-7ff9-4702-89d8-081b0997310e" containerID="4a6abd801e87691fbf2ee97f9294fc03074ab69b5b530a925669bf5c4f9d9c1f" exitCode=0 Feb 24 10:00:01 crc kubenswrapper[4755]: I0224 10:00:01.534396 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-98jrb" event={"ID":"b86f7829-7ff9-4702-89d8-081b0997310e","Type":"ContainerDied","Data":"4a6abd801e87691fbf2ee97f9294fc03074ab69b5b530a925669bf5c4f9d9c1f"} Feb 24 10:00:01 crc kubenswrapper[4755]: I0224 10:00:01.534427 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-98jrb" 
event={"ID":"b86f7829-7ff9-4702-89d8-081b0997310e","Type":"ContainerStarted","Data":"5d3b418d82987a436725cb55b0afb9780a8a6fca7791dda2a867bd5e7febc99f"} Feb 24 10:00:03 crc kubenswrapper[4755]: I0224 10:00:03.004476 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-98jrb" Feb 24 10:00:03 crc kubenswrapper[4755]: I0224 10:00:03.124585 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-662hf\" (UniqueName: \"kubernetes.io/projected/b86f7829-7ff9-4702-89d8-081b0997310e-kube-api-access-662hf\") pod \"b86f7829-7ff9-4702-89d8-081b0997310e\" (UID: \"b86f7829-7ff9-4702-89d8-081b0997310e\") " Feb 24 10:00:03 crc kubenswrapper[4755]: I0224 10:00:03.124673 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b86f7829-7ff9-4702-89d8-081b0997310e-config-volume\") pod \"b86f7829-7ff9-4702-89d8-081b0997310e\" (UID: \"b86f7829-7ff9-4702-89d8-081b0997310e\") " Feb 24 10:00:03 crc kubenswrapper[4755]: I0224 10:00:03.124754 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b86f7829-7ff9-4702-89d8-081b0997310e-secret-volume\") pod \"b86f7829-7ff9-4702-89d8-081b0997310e\" (UID: \"b86f7829-7ff9-4702-89d8-081b0997310e\") " Feb 24 10:00:03 crc kubenswrapper[4755]: I0224 10:00:03.125844 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b86f7829-7ff9-4702-89d8-081b0997310e-config-volume" (OuterVolumeSpecName: "config-volume") pod "b86f7829-7ff9-4702-89d8-081b0997310e" (UID: "b86f7829-7ff9-4702-89d8-081b0997310e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:00:03 crc kubenswrapper[4755]: I0224 10:00:03.126476 4755 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b86f7829-7ff9-4702-89d8-081b0997310e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:03 crc kubenswrapper[4755]: I0224 10:00:03.130774 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b86f7829-7ff9-4702-89d8-081b0997310e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b86f7829-7ff9-4702-89d8-081b0997310e" (UID: "b86f7829-7ff9-4702-89d8-081b0997310e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 10:00:03 crc kubenswrapper[4755]: I0224 10:00:03.131781 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b86f7829-7ff9-4702-89d8-081b0997310e-kube-api-access-662hf" (OuterVolumeSpecName: "kube-api-access-662hf") pod "b86f7829-7ff9-4702-89d8-081b0997310e" (UID: "b86f7829-7ff9-4702-89d8-081b0997310e"). InnerVolumeSpecName "kube-api-access-662hf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:00:03 crc kubenswrapper[4755]: I0224 10:00:03.228023 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-662hf\" (UniqueName: \"kubernetes.io/projected/b86f7829-7ff9-4702-89d8-081b0997310e-kube-api-access-662hf\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:03 crc kubenswrapper[4755]: I0224 10:00:03.228129 4755 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b86f7829-7ff9-4702-89d8-081b0997310e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:03 crc kubenswrapper[4755]: I0224 10:00:03.550517 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-98jrb" event={"ID":"b86f7829-7ff9-4702-89d8-081b0997310e","Type":"ContainerDied","Data":"5d3b418d82987a436725cb55b0afb9780a8a6fca7791dda2a867bd5e7febc99f"} Feb 24 10:00:03 crc kubenswrapper[4755]: I0224 10:00:03.550578 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d3b418d82987a436725cb55b0afb9780a8a6fca7791dda2a867bd5e7febc99f" Feb 24 10:00:03 crc kubenswrapper[4755]: I0224 10:00:03.550586 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532120-98jrb" Feb 24 10:00:08 crc kubenswrapper[4755]: I0224 10:00:08.824044 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54695856b6-t7rr8"] Feb 24 10:00:08 crc kubenswrapper[4755]: I0224 10:00:08.824714 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" podUID="c093fef3-5dad-496a-b7ee-4d25fd8e36ba" containerName="controller-manager" containerID="cri-o://1d3ddb56f7f75afb87b6cf698b851709e2e8f2d75b4992bcdc9c07a41ca9e4f4" gracePeriod=30 Feb 24 10:00:08 crc kubenswrapper[4755]: I0224 10:00:08.914726 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc"] Feb 24 10:00:08 crc kubenswrapper[4755]: I0224 10:00:08.915189 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc" podUID="e13af10f-6473-4b6f-b1f9-d5d8943426b1" containerName="route-controller-manager" containerID="cri-o://56055e99c48806b86a5f2a1e3c0e1dc532a5b7eac722e079e7c60fd794943bb8" gracePeriod=30 Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.383870 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.388096 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.520817 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-serving-cert\") pod \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\" (UID: \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\") " Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.520906 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m86gd\" (UniqueName: \"kubernetes.io/projected/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-kube-api-access-m86gd\") pod \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\" (UID: \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\") " Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.520969 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-client-ca\") pod \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\" (UID: \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\") " Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.520995 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e13af10f-6473-4b6f-b1f9-d5d8943426b1-serving-cert\") pod \"e13af10f-6473-4b6f-b1f9-d5d8943426b1\" (UID: \"e13af10f-6473-4b6f-b1f9-d5d8943426b1\") " Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.521033 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-proxy-ca-bundles\") pod \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\" (UID: \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\") " Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.521089 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/e13af10f-6473-4b6f-b1f9-d5d8943426b1-config\") pod \"e13af10f-6473-4b6f-b1f9-d5d8943426b1\" (UID: \"e13af10f-6473-4b6f-b1f9-d5d8943426b1\") " Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.521121 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e13af10f-6473-4b6f-b1f9-d5d8943426b1-client-ca\") pod \"e13af10f-6473-4b6f-b1f9-d5d8943426b1\" (UID: \"e13af10f-6473-4b6f-b1f9-d5d8943426b1\") " Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.521143 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92gnh\" (UniqueName: \"kubernetes.io/projected/e13af10f-6473-4b6f-b1f9-d5d8943426b1-kube-api-access-92gnh\") pod \"e13af10f-6473-4b6f-b1f9-d5d8943426b1\" (UID: \"e13af10f-6473-4b6f-b1f9-d5d8943426b1\") " Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.521188 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-config\") pod \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\" (UID: \"c093fef3-5dad-496a-b7ee-4d25fd8e36ba\") " Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.522029 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e13af10f-6473-4b6f-b1f9-d5d8943426b1-config" (OuterVolumeSpecName: "config") pod "e13af10f-6473-4b6f-b1f9-d5d8943426b1" (UID: "e13af10f-6473-4b6f-b1f9-d5d8943426b1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.522358 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e13af10f-6473-4b6f-b1f9-d5d8943426b1-client-ca" (OuterVolumeSpecName: "client-ca") pod "e13af10f-6473-4b6f-b1f9-d5d8943426b1" (UID: "e13af10f-6473-4b6f-b1f9-d5d8943426b1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.522376 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c093fef3-5dad-496a-b7ee-4d25fd8e36ba" (UID: "c093fef3-5dad-496a-b7ee-4d25fd8e36ba"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.522624 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-client-ca" (OuterVolumeSpecName: "client-ca") pod "c093fef3-5dad-496a-b7ee-4d25fd8e36ba" (UID: "c093fef3-5dad-496a-b7ee-4d25fd8e36ba"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.522711 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-config" (OuterVolumeSpecName: "config") pod "c093fef3-5dad-496a-b7ee-4d25fd8e36ba" (UID: "c093fef3-5dad-496a-b7ee-4d25fd8e36ba"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.526005 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13af10f-6473-4b6f-b1f9-d5d8943426b1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e13af10f-6473-4b6f-b1f9-d5d8943426b1" (UID: "e13af10f-6473-4b6f-b1f9-d5d8943426b1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.526125 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c093fef3-5dad-496a-b7ee-4d25fd8e36ba" (UID: "c093fef3-5dad-496a-b7ee-4d25fd8e36ba"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.526259 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-kube-api-access-m86gd" (OuterVolumeSpecName: "kube-api-access-m86gd") pod "c093fef3-5dad-496a-b7ee-4d25fd8e36ba" (UID: "c093fef3-5dad-496a-b7ee-4d25fd8e36ba"). InnerVolumeSpecName "kube-api-access-m86gd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.526282 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e13af10f-6473-4b6f-b1f9-d5d8943426b1-kube-api-access-92gnh" (OuterVolumeSpecName: "kube-api-access-92gnh") pod "e13af10f-6473-4b6f-b1f9-d5d8943426b1" (UID: "e13af10f-6473-4b6f-b1f9-d5d8943426b1"). InnerVolumeSpecName "kube-api-access-92gnh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.588676 4755 generic.go:334] "Generic (PLEG): container finished" podID="e13af10f-6473-4b6f-b1f9-d5d8943426b1" containerID="56055e99c48806b86a5f2a1e3c0e1dc532a5b7eac722e079e7c60fd794943bb8" exitCode=0 Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.588739 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.588745 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc" event={"ID":"e13af10f-6473-4b6f-b1f9-d5d8943426b1","Type":"ContainerDied","Data":"56055e99c48806b86a5f2a1e3c0e1dc532a5b7eac722e079e7c60fd794943bb8"} Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.588913 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc" event={"ID":"e13af10f-6473-4b6f-b1f9-d5d8943426b1","Type":"ContainerDied","Data":"43cf6fe18ee31cd9b478c2c3e76e2c2258e6b1782aca7a51382532ff16fe7379"} Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.588996 4755 scope.go:117] "RemoveContainer" containerID="56055e99c48806b86a5f2a1e3c0e1dc532a5b7eac722e079e7c60fd794943bb8" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.589992 4755 generic.go:334] "Generic (PLEG): container finished" podID="c093fef3-5dad-496a-b7ee-4d25fd8e36ba" containerID="1d3ddb56f7f75afb87b6cf698b851709e2e8f2d75b4992bcdc9c07a41ca9e4f4" exitCode=0 Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.590033 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" 
event={"ID":"c093fef3-5dad-496a-b7ee-4d25fd8e36ba","Type":"ContainerDied","Data":"1d3ddb56f7f75afb87b6cf698b851709e2e8f2d75b4992bcdc9c07a41ca9e4f4"} Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.590055 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" event={"ID":"c093fef3-5dad-496a-b7ee-4d25fd8e36ba","Type":"ContainerDied","Data":"b8eda5abd18b0e7f89256d73dc447d41d1444f6ac2d70d9407c3f60a42a2bd94"} Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.590131 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54695856b6-t7rr8" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.605479 4755 scope.go:117] "RemoveContainer" containerID="56055e99c48806b86a5f2a1e3c0e1dc532a5b7eac722e079e7c60fd794943bb8" Feb 24 10:00:09 crc kubenswrapper[4755]: E0224 10:00:09.606321 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56055e99c48806b86a5f2a1e3c0e1dc532a5b7eac722e079e7c60fd794943bb8\": container with ID starting with 56055e99c48806b86a5f2a1e3c0e1dc532a5b7eac722e079e7c60fd794943bb8 not found: ID does not exist" containerID="56055e99c48806b86a5f2a1e3c0e1dc532a5b7eac722e079e7c60fd794943bb8" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.606356 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56055e99c48806b86a5f2a1e3c0e1dc532a5b7eac722e079e7c60fd794943bb8"} err="failed to get container status \"56055e99c48806b86a5f2a1e3c0e1dc532a5b7eac722e079e7c60fd794943bb8\": rpc error: code = NotFound desc = could not find container \"56055e99c48806b86a5f2a1e3c0e1dc532a5b7eac722e079e7c60fd794943bb8\": container with ID starting with 56055e99c48806b86a5f2a1e3c0e1dc532a5b7eac722e079e7c60fd794943bb8 not found: ID does not exist" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 
10:00:09.606388 4755 scope.go:117] "RemoveContainer" containerID="1d3ddb56f7f75afb87b6cf698b851709e2e8f2d75b4992bcdc9c07a41ca9e4f4" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.616250 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc"] Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.620723 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-886c7c4c6-jfblc"] Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.623123 4755 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.623152 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e13af10f-6473-4b6f-b1f9-d5d8943426b1-config\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.623165 4755 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e13af10f-6473-4b6f-b1f9-d5d8943426b1-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.623179 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92gnh\" (UniqueName: \"kubernetes.io/projected/e13af10f-6473-4b6f-b1f9-d5d8943426b1-kube-api-access-92gnh\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.623191 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-config\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.623201 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.623213 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m86gd\" (UniqueName: \"kubernetes.io/projected/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-kube-api-access-m86gd\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.623223 4755 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c093fef3-5dad-496a-b7ee-4d25fd8e36ba-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.623233 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e13af10f-6473-4b6f-b1f9-d5d8943426b1-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.627914 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54695856b6-t7rr8"] Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.629957 4755 scope.go:117] "RemoveContainer" containerID="1d3ddb56f7f75afb87b6cf698b851709e2e8f2d75b4992bcdc9c07a41ca9e4f4" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 10:00:09.630196 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-54695856b6-t7rr8"] Feb 24 10:00:09 crc kubenswrapper[4755]: E0224 10:00:09.630641 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d3ddb56f7f75afb87b6cf698b851709e2e8f2d75b4992bcdc9c07a41ca9e4f4\": container with ID starting with 1d3ddb56f7f75afb87b6cf698b851709e2e8f2d75b4992bcdc9c07a41ca9e4f4 not found: ID does not exist" containerID="1d3ddb56f7f75afb87b6cf698b851709e2e8f2d75b4992bcdc9c07a41ca9e4f4" Feb 24 10:00:09 crc kubenswrapper[4755]: I0224 
10:00:09.630686 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d3ddb56f7f75afb87b6cf698b851709e2e8f2d75b4992bcdc9c07a41ca9e4f4"} err="failed to get container status \"1d3ddb56f7f75afb87b6cf698b851709e2e8f2d75b4992bcdc9c07a41ca9e4f4\": rpc error: code = NotFound desc = could not find container \"1d3ddb56f7f75afb87b6cf698b851709e2e8f2d75b4992bcdc9c07a41ca9e4f4\": container with ID starting with 1d3ddb56f7f75afb87b6cf698b851709e2e8f2d75b4992bcdc9c07a41ca9e4f4 not found: ID does not exist" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.329061 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c093fef3-5dad-496a-b7ee-4d25fd8e36ba" path="/var/lib/kubelet/pods/c093fef3-5dad-496a-b7ee-4d25fd8e36ba/volumes" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.330672 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e13af10f-6473-4b6f-b1f9-d5d8943426b1" path="/var/lib/kubelet/pods/e13af10f-6473-4b6f-b1f9-d5d8943426b1/volumes" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.508160 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5"] Feb 24 10:00:10 crc kubenswrapper[4755]: E0224 10:00:10.508488 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e13af10f-6473-4b6f-b1f9-d5d8943426b1" containerName="route-controller-manager" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.508508 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13af10f-6473-4b6f-b1f9-d5d8943426b1" containerName="route-controller-manager" Feb 24 10:00:10 crc kubenswrapper[4755]: E0224 10:00:10.508535 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b86f7829-7ff9-4702-89d8-081b0997310e" containerName="collect-profiles" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.508545 4755 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b86f7829-7ff9-4702-89d8-081b0997310e" containerName="collect-profiles" Feb 24 10:00:10 crc kubenswrapper[4755]: E0224 10:00:10.508563 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c093fef3-5dad-496a-b7ee-4d25fd8e36ba" containerName="controller-manager" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.508576 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="c093fef3-5dad-496a-b7ee-4d25fd8e36ba" containerName="controller-manager" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.508728 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="b86f7829-7ff9-4702-89d8-081b0997310e" containerName="collect-profiles" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.508747 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="e13af10f-6473-4b6f-b1f9-d5d8943426b1" containerName="route-controller-manager" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.508763 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="c093fef3-5dad-496a-b7ee-4d25fd8e36ba" containerName="controller-manager" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.509360 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.512250 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8c9cf5678-78bjp"] Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.513284 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.521820 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.522571 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.522610 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.522647 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.522738 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.522826 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.522929 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.522966 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.523095 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.523152 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 
10:00:10.523687 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.524011 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.525398 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5"] Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.531953 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8c9cf5678-78bjp"] Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.532198 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.537380 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0255ac74-783f-4d88-b6b5-bd8be489d6d7-client-ca\") pod \"controller-manager-8c9cf5678-78bjp\" (UID: \"0255ac74-783f-4d88-b6b5-bd8be489d6d7\") " pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.537436 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0255ac74-783f-4d88-b6b5-bd8be489d6d7-serving-cert\") pod \"controller-manager-8c9cf5678-78bjp\" (UID: \"0255ac74-783f-4d88-b6b5-bd8be489d6d7\") " pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.537464 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9-config\") pod \"route-controller-manager-577b4cdcd5-cr4b5\" (UID: \"4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9\") " pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.537497 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9-client-ca\") pod \"route-controller-manager-577b4cdcd5-cr4b5\" (UID: \"4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9\") " pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.537550 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0255ac74-783f-4d88-b6b5-bd8be489d6d7-config\") pod \"controller-manager-8c9cf5678-78bjp\" (UID: \"0255ac74-783f-4d88-b6b5-bd8be489d6d7\") " pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.537573 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9-serving-cert\") pod \"route-controller-manager-577b4cdcd5-cr4b5\" (UID: \"4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9\") " pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.537597 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0255ac74-783f-4d88-b6b5-bd8be489d6d7-proxy-ca-bundles\") pod \"controller-manager-8c9cf5678-78bjp\" (UID: \"0255ac74-783f-4d88-b6b5-bd8be489d6d7\") " 
pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.537633 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhr8n\" (UniqueName: \"kubernetes.io/projected/0255ac74-783f-4d88-b6b5-bd8be489d6d7-kube-api-access-rhr8n\") pod \"controller-manager-8c9cf5678-78bjp\" (UID: \"0255ac74-783f-4d88-b6b5-bd8be489d6d7\") " pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.537705 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9dpn\" (UniqueName: \"kubernetes.io/projected/4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9-kube-api-access-m9dpn\") pod \"route-controller-manager-577b4cdcd5-cr4b5\" (UID: \"4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9\") " pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.638303 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9-client-ca\") pod \"route-controller-manager-577b4cdcd5-cr4b5\" (UID: \"4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9\") " pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.638355 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0255ac74-783f-4d88-b6b5-bd8be489d6d7-config\") pod \"controller-manager-8c9cf5678-78bjp\" (UID: \"0255ac74-783f-4d88-b6b5-bd8be489d6d7\") " pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.638377 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9-serving-cert\") pod \"route-controller-manager-577b4cdcd5-cr4b5\" (UID: \"4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9\") " pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.638394 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0255ac74-783f-4d88-b6b5-bd8be489d6d7-proxy-ca-bundles\") pod \"controller-manager-8c9cf5678-78bjp\" (UID: \"0255ac74-783f-4d88-b6b5-bd8be489d6d7\") " pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.638422 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhr8n\" (UniqueName: \"kubernetes.io/projected/0255ac74-783f-4d88-b6b5-bd8be489d6d7-kube-api-access-rhr8n\") pod \"controller-manager-8c9cf5678-78bjp\" (UID: \"0255ac74-783f-4d88-b6b5-bd8be489d6d7\") " pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.638461 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9dpn\" (UniqueName: \"kubernetes.io/projected/4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9-kube-api-access-m9dpn\") pod \"route-controller-manager-577b4cdcd5-cr4b5\" (UID: \"4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9\") " pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.638504 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0255ac74-783f-4d88-b6b5-bd8be489d6d7-client-ca\") pod \"controller-manager-8c9cf5678-78bjp\" (UID: \"0255ac74-783f-4d88-b6b5-bd8be489d6d7\") " pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 
24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.638528 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0255ac74-783f-4d88-b6b5-bd8be489d6d7-serving-cert\") pod \"controller-manager-8c9cf5678-78bjp\" (UID: \"0255ac74-783f-4d88-b6b5-bd8be489d6d7\") " pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.638543 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9-config\") pod \"route-controller-manager-577b4cdcd5-cr4b5\" (UID: \"4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9\") " pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.639659 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9-config\") pod \"route-controller-manager-577b4cdcd5-cr4b5\" (UID: \"4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9\") " pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.640396 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0255ac74-783f-4d88-b6b5-bd8be489d6d7-client-ca\") pod \"controller-manager-8c9cf5678-78bjp\" (UID: \"0255ac74-783f-4d88-b6b5-bd8be489d6d7\") " pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.640981 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0255ac74-783f-4d88-b6b5-bd8be489d6d7-config\") pod \"controller-manager-8c9cf5678-78bjp\" (UID: \"0255ac74-783f-4d88-b6b5-bd8be489d6d7\") " 
pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.641285 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0255ac74-783f-4d88-b6b5-bd8be489d6d7-proxy-ca-bundles\") pod \"controller-manager-8c9cf5678-78bjp\" (UID: \"0255ac74-783f-4d88-b6b5-bd8be489d6d7\") " pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.641706 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9-client-ca\") pod \"route-controller-manager-577b4cdcd5-cr4b5\" (UID: \"4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9\") " pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.643966 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9-serving-cert\") pod \"route-controller-manager-577b4cdcd5-cr4b5\" (UID: \"4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9\") " pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.644227 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0255ac74-783f-4d88-b6b5-bd8be489d6d7-serving-cert\") pod \"controller-manager-8c9cf5678-78bjp\" (UID: \"0255ac74-783f-4d88-b6b5-bd8be489d6d7\") " pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.658765 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9dpn\" (UniqueName: 
\"kubernetes.io/projected/4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9-kube-api-access-m9dpn\") pod \"route-controller-manager-577b4cdcd5-cr4b5\" (UID: \"4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9\") " pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.666027 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhr8n\" (UniqueName: \"kubernetes.io/projected/0255ac74-783f-4d88-b6b5-bd8be489d6d7-kube-api-access-rhr8n\") pod \"controller-manager-8c9cf5678-78bjp\" (UID: \"0255ac74-783f-4d88-b6b5-bd8be489d6d7\") " pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.839880 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" Feb 24 10:00:10 crc kubenswrapper[4755]: I0224 10:00:10.856088 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.277740 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5"] Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.283927 4755 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.284913 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.285551 4755 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.285879 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f" gracePeriod=15 Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.285911 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0" gracePeriod=15 Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.285929 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9" gracePeriod=15 Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.285947 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1" gracePeriod=15 Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.285940 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e" gracePeriod=15 Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.287033 4755 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 10:00:11 crc kubenswrapper[4755]: E0224 10:00:11.287409 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.287439 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 24 10:00:11 crc kubenswrapper[4755]: E0224 10:00:11.287456 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.287469 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 24 10:00:11 crc kubenswrapper[4755]: E0224 10:00:11.287494 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.287504 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 10:00:11 crc kubenswrapper[4755]: E0224 10:00:11.287518 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.287529 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Feb 24 10:00:11 crc kubenswrapper[4755]: E0224 10:00:11.287544 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.287554 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 24 10:00:11 crc kubenswrapper[4755]: E0224 10:00:11.287567 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.287578 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 10:00:11 crc kubenswrapper[4755]: E0224 10:00:11.287592 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.287604 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 24 10:00:11 crc kubenswrapper[4755]: E0224 10:00:11.287620 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.287629 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 10:00:11 crc kubenswrapper[4755]: E0224 10:00:11.287644 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.287656 4755 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.287854 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.287877 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.287888 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.287904 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.287918 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.287930 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.287943 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 10:00:11 crc kubenswrapper[4755]: E0224 10:00:11.288138 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.288154 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.288294 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.288317 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.447398 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.447817 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.447940 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.448026 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.448081 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.448153 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.448202 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.448297 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 10:00:11 crc kubenswrapper[4755]: E0224 10:00:11.475443 4755 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/events\": dial 
tcp 38.102.83.220:6443: connect: connection refused" event="&Event{ObjectMeta:{route-controller-manager-577b4cdcd5-cr4b5.1897266927cb6392 openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-577b4cdcd5-cr4b5,UID:4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9,APIVersion:v1,ResourceVersion:29978,FieldPath:spec.containers{route-controller-manager},},Reason:Created,Message:Created container route-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 10:00:11.47464181 +0000 UTC m=+315.931164353,LastTimestamp:2026-02-24 10:00:11.47464181 +0000 UTC m=+315.931164353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.549080 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.549134 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.549141 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 
10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.549162 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.549189 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.549239 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.549271 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.549304 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.549337 4755 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.549300 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.549356 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.549311 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.549478 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.549586 4755 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.549622 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.549590 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.609339 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.613644 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.614491 4755 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1" exitCode=0 Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.614520 4755 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0" exitCode=0 Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.614527 4755 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9" exitCode=0 Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.614534 4755 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e" exitCode=2 Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.614623 4755 scope.go:117] "RemoveContainer" containerID="efaaab27068c8aee808b828675e2c2667d17fe82a8ecad597bd31d78429189b6" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.621497 4755 generic.go:334] "Generic (PLEG): container finished" podID="48e88ab9-cdeb-458e-b1ff-5fc96d923829" containerID="32d5935fbce34be3dca711696b778f971fc05581131fa8bdb137141133f9683c" exitCode=0 Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.621572 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"48e88ab9-cdeb-458e-b1ff-5fc96d923829","Type":"ContainerDied","Data":"32d5935fbce34be3dca711696b778f971fc05581131fa8bdb137141133f9683c"} Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.622192 4755 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.622596 4755 status_manager.go:851] "Failed to get status for pod" podUID="48e88ab9-cdeb-458e-b1ff-5fc96d923829" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.622842 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" event={"ID":"4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9","Type":"ContainerStarted","Data":"fc0de43d6299677f2d5a9f6bb62f969b8174d8eb793457f0441fc3a4b9bb2655"} Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.622873 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" event={"ID":"4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9","Type":"ContainerStarted","Data":"5b2d5365957660ddb83fa91c242011917b2ca57d94c03e9e6cdfe93f056e19d5"} Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.623263 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.623330 4755 status_manager.go:851] "Failed to get status for pod" podUID="48e88ab9-cdeb-458e-b1ff-5fc96d923829" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.623569 4755 status_manager.go:851] "Failed to get status for pod" podUID="4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-577b4cdcd5-cr4b5\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:11 crc kubenswrapper[4755]: I0224 10:00:11.623850 4755 
status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:12 crc kubenswrapper[4755]: I0224 10:00:12.623248 4755 patch_prober.go:28] interesting pod/route-controller-manager-577b4cdcd5-cr4b5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 10:00:12 crc kubenswrapper[4755]: I0224 10:00:12.623751 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" podUID="4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 10:00:12 crc kubenswrapper[4755]: I0224 10:00:12.632656 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 10:00:12 crc kubenswrapper[4755]: I0224 10:00:12.965192 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 24 10:00:12 crc kubenswrapper[4755]: I0224 10:00:12.965833 4755 status_manager.go:851] "Failed to get status for pod" podUID="48e88ab9-cdeb-458e-b1ff-5fc96d923829" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:12 crc kubenswrapper[4755]: I0224 10:00:12.966356 4755 status_manager.go:851] "Failed to get status for pod" podUID="4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-577b4cdcd5-cr4b5\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.066401 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48e88ab9-cdeb-458e-b1ff-5fc96d923829-kube-api-access\") pod \"48e88ab9-cdeb-458e-b1ff-5fc96d923829\" (UID: \"48e88ab9-cdeb-458e-b1ff-5fc96d923829\") " Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.066466 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/48e88ab9-cdeb-458e-b1ff-5fc96d923829-kubelet-dir\") pod \"48e88ab9-cdeb-458e-b1ff-5fc96d923829\" (UID: \"48e88ab9-cdeb-458e-b1ff-5fc96d923829\") " Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.066607 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48e88ab9-cdeb-458e-b1ff-5fc96d923829-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "48e88ab9-cdeb-458e-b1ff-5fc96d923829" (UID: "48e88ab9-cdeb-458e-b1ff-5fc96d923829"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.066702 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/48e88ab9-cdeb-458e-b1ff-5fc96d923829-var-lock\") pod \"48e88ab9-cdeb-458e-b1ff-5fc96d923829\" (UID: \"48e88ab9-cdeb-458e-b1ff-5fc96d923829\") " Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.066747 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48e88ab9-cdeb-458e-b1ff-5fc96d923829-var-lock" (OuterVolumeSpecName: "var-lock") pod "48e88ab9-cdeb-458e-b1ff-5fc96d923829" (UID: "48e88ab9-cdeb-458e-b1ff-5fc96d923829"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.067133 4755 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/48e88ab9-cdeb-458e-b1ff-5fc96d923829-var-lock\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.067159 4755 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/48e88ab9-cdeb-458e-b1ff-5fc96d923829-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.076274 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48e88ab9-cdeb-458e-b1ff-5fc96d923829-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "48e88ab9-cdeb-458e-b1ff-5fc96d923829" (UID: "48e88ab9-cdeb-458e-b1ff-5fc96d923829"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.168582 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48e88ab9-cdeb-458e-b1ff-5fc96d923829-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.633982 4755 patch_prober.go:28] interesting pod/route-controller-manager-577b4cdcd5-cr4b5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.634318 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" podUID="4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.640012 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.640791 4755 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f" exitCode=0 Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.640847 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e73578d4a14ca21a90a3220b332309898d3cd735a7ea0426788bc28064ab2530" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.642101 4755 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"48e88ab9-cdeb-458e-b1ff-5fc96d923829","Type":"ContainerDied","Data":"02290b6a5d8e98aadb9d3531f883042e49060367982f3339fc20749c8fce5fab"} Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.642125 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02290b6a5d8e98aadb9d3531f883042e49060367982f3339fc20749c8fce5fab" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.642177 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.652405 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.653406 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.653984 4755 status_manager.go:851] "Failed to get status for pod" podUID="4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-577b4cdcd5-cr4b5\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.654576 4755 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.655050 4755 status_manager.go:851] "Failed to get status for pod" podUID="48e88ab9-cdeb-458e-b1ff-5fc96d923829" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.655491 4755 status_manager.go:851] "Failed to get status for pod" podUID="4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-577b4cdcd5-cr4b5\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.655899 4755 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.656352 4755 status_manager.go:851] "Failed to get status for pod" podUID="48e88ab9-cdeb-458e-b1ff-5fc96d923829" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.777396 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.777500 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.777506 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.777576 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.777731 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.777762 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.778049 4755 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.778108 4755 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:13 crc kubenswrapper[4755]: I0224 10:00:13.778125 4755 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:14 crc kubenswrapper[4755]: I0224 10:00:14.327218 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 24 10:00:14 crc kubenswrapper[4755]: I0224 10:00:14.647988 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 10:00:14 crc kubenswrapper[4755]: I0224 10:00:14.649482 4755 status_manager.go:851] "Failed to get status for pod" podUID="48e88ab9-cdeb-458e-b1ff-5fc96d923829" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:14 crc kubenswrapper[4755]: I0224 10:00:14.650028 4755 status_manager.go:851] "Failed to get status for pod" podUID="4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-577b4cdcd5-cr4b5\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:14 crc kubenswrapper[4755]: I0224 10:00:14.650651 4755 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:14 crc kubenswrapper[4755]: I0224 10:00:14.654116 4755 status_manager.go:851] "Failed to get status for pod" podUID="48e88ab9-cdeb-458e-b1ff-5fc96d923829" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:14 crc kubenswrapper[4755]: I0224 10:00:14.657967 4755 status_manager.go:851] "Failed to get status for pod" podUID="4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-577b4cdcd5-cr4b5\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:14 crc kubenswrapper[4755]: I0224 10:00:14.658865 4755 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:15 crc kubenswrapper[4755]: E0224 10:00:15.515934 4755 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/events\": dial tcp 38.102.83.220:6443: connect: connection refused" event="&Event{ObjectMeta:{route-controller-manager-577b4cdcd5-cr4b5.1897266927cb6392 openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-577b4cdcd5-cr4b5,UID:4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9,APIVersion:v1,ResourceVersion:29978,FieldPath:spec.containers{route-controller-manager},},Reason:Created,Message:Created container route-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 10:00:11.47464181 +0000 UTC m=+315.931164353,LastTimestamp:2026-02-24 10:00:11.47464181 +0000 UTC m=+315.931164353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 10:00:16 crc kubenswrapper[4755]: I0224 10:00:16.321778 4755 status_manager.go:851] "Failed to get status for pod" podUID="4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-577b4cdcd5-cr4b5\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:16 crc kubenswrapper[4755]: I0224 10:00:16.322769 4755 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:16 crc kubenswrapper[4755]: I0224 10:00:16.323414 4755 status_manager.go:851] "Failed to get status for pod" podUID="48e88ab9-cdeb-458e-b1ff-5fc96d923829" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:16 crc kubenswrapper[4755]: E0224 10:00:16.323451 4755 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.220:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 10:00:16 crc kubenswrapper[4755]: I0224 10:00:16.326793 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 10:00:16 crc kubenswrapper[4755]: W0224 10:00:16.361212 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-274b7b8bcca33b30d8daff2c2f20ca09851edb89bef10d7b57d176a44c04f30f WatchSource:0}: Error finding container 274b7b8bcca33b30d8daff2c2f20ca09851edb89bef10d7b57d176a44c04f30f: Status 404 returned error can't find the container with id 274b7b8bcca33b30d8daff2c2f20ca09851edb89bef10d7b57d176a44c04f30f Feb 24 10:00:16 crc kubenswrapper[4755]: E0224 10:00:16.604268 4755 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 24 10:00:16 crc kubenswrapper[4755]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-8c9cf5678-78bjp_openshift-controller-manager_0255ac74-783f-4d88-b6b5-bd8be489d6d7_0(9fc12af645b967684f5b8185c792338ccc1d17627c708e5de490c83979a39b37): error adding pod openshift-controller-manager_controller-manager-8c9cf5678-78bjp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9fc12af645b967684f5b8185c792338ccc1d17627c708e5de490c83979a39b37" Netns:"/var/run/netns/a4430df6-39c4-4ef0-85d0-6569c1c2d54e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-8c9cf5678-78bjp;K8S_POD_INFRA_CONTAINER_ID=9fc12af645b967684f5b8185c792338ccc1d17627c708e5de490c83979a39b37;K8S_POD_UID=0255ac74-783f-4d88-b6b5-bd8be489d6d7" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-8c9cf5678-78bjp] networking: Multus: [openshift-controller-manager/controller-manager-8c9cf5678-78bjp/0255ac74-783f-4d88-b6b5-bd8be489d6d7]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update 
the pod controller-manager-8c9cf5678-78bjp in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-8c9cf5678-78bjp in out of cluster comm: status update failed for pod openshift-controller-manager/controller-manager-8c9cf5678-78bjp: Put "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-8c9cf5678-78bjp/status?timeout=1m0s": dial tcp 38.102.83.220:6443: connect: connection refused Feb 24 10:00:16 crc kubenswrapper[4755]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 24 10:00:16 crc kubenswrapper[4755]: > Feb 24 10:00:16 crc kubenswrapper[4755]: E0224 10:00:16.604843 4755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 24 10:00:16 crc kubenswrapper[4755]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-8c9cf5678-78bjp_openshift-controller-manager_0255ac74-783f-4d88-b6b5-bd8be489d6d7_0(9fc12af645b967684f5b8185c792338ccc1d17627c708e5de490c83979a39b37): error adding pod openshift-controller-manager_controller-manager-8c9cf5678-78bjp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9fc12af645b967684f5b8185c792338ccc1d17627c708e5de490c83979a39b37" Netns:"/var/run/netns/a4430df6-39c4-4ef0-85d0-6569c1c2d54e" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-8c9cf5678-78bjp;K8S_POD_INFRA_CONTAINER_ID=9fc12af645b967684f5b8185c792338ccc1d17627c708e5de490c83979a39b37;K8S_POD_UID=0255ac74-783f-4d88-b6b5-bd8be489d6d7" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-8c9cf5678-78bjp] networking: Multus: [openshift-controller-manager/controller-manager-8c9cf5678-78bjp/0255ac74-783f-4d88-b6b5-bd8be489d6d7]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-8c9cf5678-78bjp in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-8c9cf5678-78bjp in out of cluster comm: status update failed for pod openshift-controller-manager/controller-manager-8c9cf5678-78bjp: Put "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-8c9cf5678-78bjp/status?timeout=1m0s": dial tcp 38.102.83.220:6443: connect: connection refused Feb 24 10:00:16 crc kubenswrapper[4755]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 24 10:00:16 crc kubenswrapper[4755]: > pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:16 crc kubenswrapper[4755]: E0224 10:00:16.604920 4755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 24 10:00:16 crc kubenswrapper[4755]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-8c9cf5678-78bjp_openshift-controller-manager_0255ac74-783f-4d88-b6b5-bd8be489d6d7_0(9fc12af645b967684f5b8185c792338ccc1d17627c708e5de490c83979a39b37): 
error adding pod openshift-controller-manager_controller-manager-8c9cf5678-78bjp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9fc12af645b967684f5b8185c792338ccc1d17627c708e5de490c83979a39b37" Netns:"/var/run/netns/a4430df6-39c4-4ef0-85d0-6569c1c2d54e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-8c9cf5678-78bjp;K8S_POD_INFRA_CONTAINER_ID=9fc12af645b967684f5b8185c792338ccc1d17627c708e5de490c83979a39b37;K8S_POD_UID=0255ac74-783f-4d88-b6b5-bd8be489d6d7" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-8c9cf5678-78bjp] networking: Multus: [openshift-controller-manager/controller-manager-8c9cf5678-78bjp/0255ac74-783f-4d88-b6b5-bd8be489d6d7]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-8c9cf5678-78bjp in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-8c9cf5678-78bjp in out of cluster comm: status update failed for pod openshift-controller-manager/controller-manager-8c9cf5678-78bjp: Put "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-8c9cf5678-78bjp/status?timeout=1m0s": dial tcp 38.102.83.220:6443: connect: connection refused Feb 24 10:00:16 crc kubenswrapper[4755]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 24 10:00:16 crc kubenswrapper[4755]: > pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:16 crc 
kubenswrapper[4755]: E0224 10:00:16.605059 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-8c9cf5678-78bjp_openshift-controller-manager(0255ac74-783f-4d88-b6b5-bd8be489d6d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-8c9cf5678-78bjp_openshift-controller-manager(0255ac74-783f-4d88-b6b5-bd8be489d6d7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-8c9cf5678-78bjp_openshift-controller-manager_0255ac74-783f-4d88-b6b5-bd8be489d6d7_0(9fc12af645b967684f5b8185c792338ccc1d17627c708e5de490c83979a39b37): error adding pod openshift-controller-manager_controller-manager-8c9cf5678-78bjp to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"9fc12af645b967684f5b8185c792338ccc1d17627c708e5de490c83979a39b37\\\" Netns:\\\"/var/run/netns/a4430df6-39c4-4ef0-85d0-6569c1c2d54e\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-8c9cf5678-78bjp;K8S_POD_INFRA_CONTAINER_ID=9fc12af645b967684f5b8185c792338ccc1d17627c708e5de490c83979a39b37;K8S_POD_UID=0255ac74-783f-4d88-b6b5-bd8be489d6d7\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-8c9cf5678-78bjp] networking: Multus: [openshift-controller-manager/controller-manager-8c9cf5678-78bjp/0255ac74-783f-4d88-b6b5-bd8be489d6d7]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-8c9cf5678-78bjp in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-8c9cf5678-78bjp in out of cluster comm: status update failed for pod openshift-controller-manager/controller-manager-8c9cf5678-78bjp: Put 
\\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-8c9cf5678-78bjp/status?timeout=1m0s\\\": dial tcp 38.102.83.220:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" podUID="0255ac74-783f-4d88-b6b5-bd8be489d6d7" Feb 24 10:00:16 crc kubenswrapper[4755]: I0224 10:00:16.654190 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" podUID="2317257d-494c-48b7-a69c-013e8b1d7d81" containerName="oauth-openshift" containerID="cri-o://9f14a4e345cfd7f30855ba0f74b783b45dd9b472aeaa987e1debcd6d90a99cd3" gracePeriod=15 Feb 24 10:00:16 crc kubenswrapper[4755]: I0224 10:00:16.660679 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:16 crc kubenswrapper[4755]: I0224 10:00:16.660968 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:16 crc kubenswrapper[4755]: I0224 10:00:16.661276 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"274b7b8bcca33b30d8daff2c2f20ca09851edb89bef10d7b57d176a44c04f30f"} Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.068047 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.069112 4755 status_manager.go:851] "Failed to get status for pod" podUID="48e88ab9-cdeb-458e-b1ff-5fc96d923829" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.069613 4755 status_manager.go:851] "Failed to get status for pod" podUID="4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-577b4cdcd5-cr4b5\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.069923 4755 status_manager.go:851] "Failed to get status for pod" podUID="2317257d-494c-48b7-a69c-013e8b1d7d81" pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-srwxs\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.231105 4755 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-ocp-branding-template\") pod \"2317257d-494c-48b7-a69c-013e8b1d7d81\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.231173 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2317257d-494c-48b7-a69c-013e8b1d7d81-audit-policies\") pod \"2317257d-494c-48b7-a69c-013e8b1d7d81\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.231237 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-user-template-error\") pod \"2317257d-494c-48b7-a69c-013e8b1d7d81\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.231269 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-serving-cert\") pod \"2317257d-494c-48b7-a69c-013e8b1d7d81\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.231293 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-cliconfig\") pod \"2317257d-494c-48b7-a69c-013e8b1d7d81\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.231362 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-session\") pod \"2317257d-494c-48b7-a69c-013e8b1d7d81\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.231393 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-user-template-login\") pod \"2317257d-494c-48b7-a69c-013e8b1d7d81\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.231430 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89zfg\" (UniqueName: \"kubernetes.io/projected/2317257d-494c-48b7-a69c-013e8b1d7d81-kube-api-access-89zfg\") pod \"2317257d-494c-48b7-a69c-013e8b1d7d81\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.231455 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-user-template-provider-selection\") pod \"2317257d-494c-48b7-a69c-013e8b1d7d81\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.231478 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-service-ca\") pod \"2317257d-494c-48b7-a69c-013e8b1d7d81\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.231502 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2317257d-494c-48b7-a69c-013e8b1d7d81-audit-dir\") pod 
\"2317257d-494c-48b7-a69c-013e8b1d7d81\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.231521 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-router-certs\") pod \"2317257d-494c-48b7-a69c-013e8b1d7d81\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.231549 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-user-idp-0-file-data\") pod \"2317257d-494c-48b7-a69c-013e8b1d7d81\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.231575 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-trusted-ca-bundle\") pod \"2317257d-494c-48b7-a69c-013e8b1d7d81\" (UID: \"2317257d-494c-48b7-a69c-013e8b1d7d81\") " Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.232000 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2317257d-494c-48b7-a69c-013e8b1d7d81-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "2317257d-494c-48b7-a69c-013e8b1d7d81" (UID: "2317257d-494c-48b7-a69c-013e8b1d7d81"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.232237 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "2317257d-494c-48b7-a69c-013e8b1d7d81" (UID: "2317257d-494c-48b7-a69c-013e8b1d7d81"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.232848 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "2317257d-494c-48b7-a69c-013e8b1d7d81" (UID: "2317257d-494c-48b7-a69c-013e8b1d7d81"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.233630 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2317257d-494c-48b7-a69c-013e8b1d7d81-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "2317257d-494c-48b7-a69c-013e8b1d7d81" (UID: "2317257d-494c-48b7-a69c-013e8b1d7d81"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.234713 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "2317257d-494c-48b7-a69c-013e8b1d7d81" (UID: "2317257d-494c-48b7-a69c-013e8b1d7d81"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.238426 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "2317257d-494c-48b7-a69c-013e8b1d7d81" (UID: "2317257d-494c-48b7-a69c-013e8b1d7d81"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.238500 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2317257d-494c-48b7-a69c-013e8b1d7d81-kube-api-access-89zfg" (OuterVolumeSpecName: "kube-api-access-89zfg") pod "2317257d-494c-48b7-a69c-013e8b1d7d81" (UID: "2317257d-494c-48b7-a69c-013e8b1d7d81"). InnerVolumeSpecName "kube-api-access-89zfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.238642 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "2317257d-494c-48b7-a69c-013e8b1d7d81" (UID: "2317257d-494c-48b7-a69c-013e8b1d7d81"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.238798 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "2317257d-494c-48b7-a69c-013e8b1d7d81" (UID: "2317257d-494c-48b7-a69c-013e8b1d7d81"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.239044 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "2317257d-494c-48b7-a69c-013e8b1d7d81" (UID: "2317257d-494c-48b7-a69c-013e8b1d7d81"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.239298 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "2317257d-494c-48b7-a69c-013e8b1d7d81" (UID: "2317257d-494c-48b7-a69c-013e8b1d7d81"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.239338 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "2317257d-494c-48b7-a69c-013e8b1d7d81" (UID: "2317257d-494c-48b7-a69c-013e8b1d7d81"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.239522 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "2317257d-494c-48b7-a69c-013e8b1d7d81" (UID: "2317257d-494c-48b7-a69c-013e8b1d7d81"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.241559 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "2317257d-494c-48b7-a69c-013e8b1d7d81" (UID: "2317257d-494c-48b7-a69c-013e8b1d7d81"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 10:00:17 crc kubenswrapper[4755]: E0224 10:00:17.267839 4755 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 24 10:00:17 crc kubenswrapper[4755]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-8c9cf5678-78bjp_openshift-controller-manager_0255ac74-783f-4d88-b6b5-bd8be489d6d7_0(0618f9540e255f7ec8a44145ecf6365633e0ec70d9a7cba2c62e626cabacdaf4): error adding pod openshift-controller-manager_controller-manager-8c9cf5678-78bjp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0618f9540e255f7ec8a44145ecf6365633e0ec70d9a7cba2c62e626cabacdaf4" Netns:"/var/run/netns/6202c3fa-66f5-4779-aac3-9827126ba3b0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-8c9cf5678-78bjp;K8S_POD_INFRA_CONTAINER_ID=0618f9540e255f7ec8a44145ecf6365633e0ec70d9a7cba2c62e626cabacdaf4;K8S_POD_UID=0255ac74-783f-4d88-b6b5-bd8be489d6d7" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-8c9cf5678-78bjp] networking: Multus: [openshift-controller-manager/controller-manager-8c9cf5678-78bjp/0255ac74-783f-4d88-b6b5-bd8be489d6d7]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-8c9cf5678-78bjp in out of cluster comm: SetNetworkStatus: 
failed to update the pod controller-manager-8c9cf5678-78bjp in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-8c9cf5678-78bjp?timeout=1m0s": dial tcp 38.102.83.220:6443: connect: connection refused Feb 24 10:00:17 crc kubenswrapper[4755]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 24 10:00:17 crc kubenswrapper[4755]: > Feb 24 10:00:17 crc kubenswrapper[4755]: E0224 10:00:17.267891 4755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 24 10:00:17 crc kubenswrapper[4755]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-8c9cf5678-78bjp_openshift-controller-manager_0255ac74-783f-4d88-b6b5-bd8be489d6d7_0(0618f9540e255f7ec8a44145ecf6365633e0ec70d9a7cba2c62e626cabacdaf4): error adding pod openshift-controller-manager_controller-manager-8c9cf5678-78bjp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0618f9540e255f7ec8a44145ecf6365633e0ec70d9a7cba2c62e626cabacdaf4" Netns:"/var/run/netns/6202c3fa-66f5-4779-aac3-9827126ba3b0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-8c9cf5678-78bjp;K8S_POD_INFRA_CONTAINER_ID=0618f9540e255f7ec8a44145ecf6365633e0ec70d9a7cba2c62e626cabacdaf4;K8S_POD_UID=0255ac74-783f-4d88-b6b5-bd8be489d6d7" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-8c9cf5678-78bjp] networking: Multus: 
[openshift-controller-manager/controller-manager-8c9cf5678-78bjp/0255ac74-783f-4d88-b6b5-bd8be489d6d7]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-8c9cf5678-78bjp in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-8c9cf5678-78bjp in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-8c9cf5678-78bjp?timeout=1m0s": dial tcp 38.102.83.220:6443: connect: connection refused Feb 24 10:00:17 crc kubenswrapper[4755]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 24 10:00:17 crc kubenswrapper[4755]: > pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:17 crc kubenswrapper[4755]: E0224 10:00:17.267912 4755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 24 10:00:17 crc kubenswrapper[4755]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-8c9cf5678-78bjp_openshift-controller-manager_0255ac74-783f-4d88-b6b5-bd8be489d6d7_0(0618f9540e255f7ec8a44145ecf6365633e0ec70d9a7cba2c62e626cabacdaf4): error adding pod openshift-controller-manager_controller-manager-8c9cf5678-78bjp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0618f9540e255f7ec8a44145ecf6365633e0ec70d9a7cba2c62e626cabacdaf4" Netns:"/var/run/netns/6202c3fa-66f5-4779-aac3-9827126ba3b0" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-8c9cf5678-78bjp;K8S_POD_INFRA_CONTAINER_ID=0618f9540e255f7ec8a44145ecf6365633e0ec70d9a7cba2c62e626cabacdaf4;K8S_POD_UID=0255ac74-783f-4d88-b6b5-bd8be489d6d7" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-8c9cf5678-78bjp] networking: Multus: [openshift-controller-manager/controller-manager-8c9cf5678-78bjp/0255ac74-783f-4d88-b6b5-bd8be489d6d7]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-8c9cf5678-78bjp in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-8c9cf5678-78bjp in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-8c9cf5678-78bjp?timeout=1m0s": dial tcp 38.102.83.220:6443: connect: connection refused Feb 24 10:00:17 crc kubenswrapper[4755]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 24 10:00:17 crc kubenswrapper[4755]: > pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:17 crc kubenswrapper[4755]: E0224 10:00:17.267972 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-8c9cf5678-78bjp_openshift-controller-manager(0255ac74-783f-4d88-b6b5-bd8be489d6d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-8c9cf5678-78bjp_openshift-controller-manager(0255ac74-783f-4d88-b6b5-bd8be489d6d7)\\\": rpc error: code = Unknown desc = failed to create pod 
network sandbox k8s_controller-manager-8c9cf5678-78bjp_openshift-controller-manager_0255ac74-783f-4d88-b6b5-bd8be489d6d7_0(0618f9540e255f7ec8a44145ecf6365633e0ec70d9a7cba2c62e626cabacdaf4): error adding pod openshift-controller-manager_controller-manager-8c9cf5678-78bjp to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"0618f9540e255f7ec8a44145ecf6365633e0ec70d9a7cba2c62e626cabacdaf4\\\" Netns:\\\"/var/run/netns/6202c3fa-66f5-4779-aac3-9827126ba3b0\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-8c9cf5678-78bjp;K8S_POD_INFRA_CONTAINER_ID=0618f9540e255f7ec8a44145ecf6365633e0ec70d9a7cba2c62e626cabacdaf4;K8S_POD_UID=0255ac74-783f-4d88-b6b5-bd8be489d6d7\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-8c9cf5678-78bjp] networking: Multus: [openshift-controller-manager/controller-manager-8c9cf5678-78bjp/0255ac74-783f-4d88-b6b5-bd8be489d6d7]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-8c9cf5678-78bjp in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-8c9cf5678-78bjp in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-8c9cf5678-78bjp?timeout=1m0s\\\": dial tcp 38.102.83.220:6443: connect: connection refused\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" podUID="0255ac74-783f-4d88-b6b5-bd8be489d6d7" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.332682 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.332713 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.332724 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89zfg\" (UniqueName: \"kubernetes.io/projected/2317257d-494c-48b7-a69c-013e8b1d7d81-kube-api-access-89zfg\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.332736 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.332747 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.332757 4755 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2317257d-494c-48b7-a69c-013e8b1d7d81-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.332767 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.332963 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.332977 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.332990 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.333001 4755 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2317257d-494c-48b7-a69c-013e8b1d7d81-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.333011 4755 reconciler_common.go:293] "Volume detached for 
volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.333023 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.333033 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2317257d-494c-48b7-a69c-013e8b1d7d81-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.671033 4755 generic.go:334] "Generic (PLEG): container finished" podID="2317257d-494c-48b7-a69c-013e8b1d7d81" containerID="9f14a4e345cfd7f30855ba0f74b783b45dd9b472aeaa987e1debcd6d90a99cd3" exitCode=0 Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.671144 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.671124 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" event={"ID":"2317257d-494c-48b7-a69c-013e8b1d7d81","Type":"ContainerDied","Data":"9f14a4e345cfd7f30855ba0f74b783b45dd9b472aeaa987e1debcd6d90a99cd3"} Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.671248 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" event={"ID":"2317257d-494c-48b7-a69c-013e8b1d7d81","Type":"ContainerDied","Data":"acbd527aff0d913682efb9feff4bf93db2c9d9181e282bd7edab74df098f15d0"} Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.671278 4755 scope.go:117] "RemoveContainer" containerID="9f14a4e345cfd7f30855ba0f74b783b45dd9b472aeaa987e1debcd6d90a99cd3" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.672145 4755 status_manager.go:851] "Failed to get status for pod" podUID="4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-577b4cdcd5-cr4b5\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.672695 4755 status_manager.go:851] "Failed to get status for pod" podUID="2317257d-494c-48b7-a69c-013e8b1d7d81" pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-srwxs\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.673231 4755 status_manager.go:851] "Failed to get status for pod" podUID="48e88ab9-cdeb-458e-b1ff-5fc96d923829" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.673687 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"58c026702587702fcdb5a43e2e28daa6ac14ac6ea385c7d176691baf9104bcaf"} Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.674565 4755 status_manager.go:851] "Failed to get status for pod" podUID="48e88ab9-cdeb-458e-b1ff-5fc96d923829" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:17 crc kubenswrapper[4755]: E0224 10:00:17.675308 4755 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.220:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.675341 4755 status_manager.go:851] "Failed to get status for pod" podUID="4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-577b4cdcd5-cr4b5\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.676036 4755 status_manager.go:851] "Failed to get status for pod" podUID="2317257d-494c-48b7-a69c-013e8b1d7d81" pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-srwxs\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.693164 4755 status_manager.go:851] "Failed to get status for pod" podUID="48e88ab9-cdeb-458e-b1ff-5fc96d923829" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.693911 4755 status_manager.go:851] "Failed to get status for pod" podUID="4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-577b4cdcd5-cr4b5\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.694576 4755 status_manager.go:851] "Failed to get status for pod" podUID="2317257d-494c-48b7-a69c-013e8b1d7d81" pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-srwxs\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.703720 4755 scope.go:117] "RemoveContainer" containerID="9f14a4e345cfd7f30855ba0f74b783b45dd9b472aeaa987e1debcd6d90a99cd3" Feb 24 10:00:17 crc kubenswrapper[4755]: E0224 10:00:17.704683 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f14a4e345cfd7f30855ba0f74b783b45dd9b472aeaa987e1debcd6d90a99cd3\": container with ID starting with 9f14a4e345cfd7f30855ba0f74b783b45dd9b472aeaa987e1debcd6d90a99cd3 not found: ID does not exist" 
containerID="9f14a4e345cfd7f30855ba0f74b783b45dd9b472aeaa987e1debcd6d90a99cd3" Feb 24 10:00:17 crc kubenswrapper[4755]: I0224 10:00:17.704749 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f14a4e345cfd7f30855ba0f74b783b45dd9b472aeaa987e1debcd6d90a99cd3"} err="failed to get container status \"9f14a4e345cfd7f30855ba0f74b783b45dd9b472aeaa987e1debcd6d90a99cd3\": rpc error: code = NotFound desc = could not find container \"9f14a4e345cfd7f30855ba0f74b783b45dd9b472aeaa987e1debcd6d90a99cd3\": container with ID starting with 9f14a4e345cfd7f30855ba0f74b783b45dd9b472aeaa987e1debcd6d90a99cd3 not found: ID does not exist" Feb 24 10:00:18 crc kubenswrapper[4755]: E0224 10:00:18.681534 4755 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.220:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 10:00:21 crc kubenswrapper[4755]: E0224 10:00:21.050181 4755 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:21 crc kubenswrapper[4755]: E0224 10:00:21.051285 4755 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:21 crc kubenswrapper[4755]: E0224 10:00:21.051818 4755 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:21 crc kubenswrapper[4755]: E0224 10:00:21.052357 4755 controller.go:195] "Failed to 
update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:21 crc kubenswrapper[4755]: E0224 10:00:21.052967 4755 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:21 crc kubenswrapper[4755]: I0224 10:00:21.053023 4755 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 24 10:00:21 crc kubenswrapper[4755]: E0224 10:00:21.053535 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="200ms" Feb 24 10:00:21 crc kubenswrapper[4755]: E0224 10:00:21.254352 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="400ms" Feb 24 10:00:21 crc kubenswrapper[4755]: E0224 10:00:21.655834 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="800ms" Feb 24 10:00:21 crc kubenswrapper[4755]: I0224 10:00:21.841035 4755 patch_prober.go:28] interesting pod/route-controller-manager-577b4cdcd5-cr4b5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 10:00:21 crc kubenswrapper[4755]: I0224 10:00:21.841121 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" podUID="4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 10:00:22 crc kubenswrapper[4755]: I0224 10:00:22.315504 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 10:00:22 crc kubenswrapper[4755]: I0224 10:00:22.316567 4755 status_manager.go:851] "Failed to get status for pod" podUID="48e88ab9-cdeb-458e-b1ff-5fc96d923829" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:22 crc kubenswrapper[4755]: I0224 10:00:22.317143 4755 status_manager.go:851] "Failed to get status for pod" podUID="4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-577b4cdcd5-cr4b5\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:22 crc kubenswrapper[4755]: I0224 10:00:22.317674 4755 status_manager.go:851] "Failed to get status for pod" podUID="2317257d-494c-48b7-a69c-013e8b1d7d81" pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-srwxs\": dial tcp 38.102.83.220:6443: 
connect: connection refused" Feb 24 10:00:22 crc kubenswrapper[4755]: I0224 10:00:22.339249 4755 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1574b657-3607-40b8-9c2e-1a056ab20b00" Feb 24 10:00:22 crc kubenswrapper[4755]: I0224 10:00:22.339283 4755 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1574b657-3607-40b8-9c2e-1a056ab20b00" Feb 24 10:00:22 crc kubenswrapper[4755]: E0224 10:00:22.339688 4755 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 10:00:22 crc kubenswrapper[4755]: I0224 10:00:22.340178 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 10:00:22 crc kubenswrapper[4755]: W0224 10:00:22.358132 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-3b1407408a61c2de7e597429f91b8138d5063e65e70c6c904770f04b75d0896c WatchSource:0}: Error finding container 3b1407408a61c2de7e597429f91b8138d5063e65e70c6c904770f04b75d0896c: Status 404 returned error can't find the container with id 3b1407408a61c2de7e597429f91b8138d5063e65e70c6c904770f04b75d0896c Feb 24 10:00:22 crc kubenswrapper[4755]: E0224 10:00:22.456227 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="1.6s" Feb 24 10:00:22 crc kubenswrapper[4755]: I0224 10:00:22.707471 4755 generic.go:334] "Generic (PLEG): container finished" 
podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="507925cc5646bdb8e73730846163c6b93a21ecfcddc53485a86322e907687439" exitCode=0 Feb 24 10:00:22 crc kubenswrapper[4755]: I0224 10:00:22.707597 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"507925cc5646bdb8e73730846163c6b93a21ecfcddc53485a86322e907687439"} Feb 24 10:00:22 crc kubenswrapper[4755]: I0224 10:00:22.707703 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3b1407408a61c2de7e597429f91b8138d5063e65e70c6c904770f04b75d0896c"} Feb 24 10:00:22 crc kubenswrapper[4755]: I0224 10:00:22.707924 4755 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1574b657-3607-40b8-9c2e-1a056ab20b00" Feb 24 10:00:22 crc kubenswrapper[4755]: I0224 10:00:22.707936 4755 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1574b657-3607-40b8-9c2e-1a056ab20b00" Feb 24 10:00:22 crc kubenswrapper[4755]: E0224 10:00:22.708261 4755 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 10:00:22 crc kubenswrapper[4755]: I0224 10:00:22.709267 4755 status_manager.go:851] "Failed to get status for pod" podUID="48e88ab9-cdeb-458e-b1ff-5fc96d923829" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:22 crc kubenswrapper[4755]: I0224 10:00:22.709488 4755 status_manager.go:851] 
"Failed to get status for pod" podUID="4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-577b4cdcd5-cr4b5\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:22 crc kubenswrapper[4755]: I0224 10:00:22.709804 4755 status_manager.go:851] "Failed to get status for pod" podUID="2317257d-494c-48b7-a69c-013e8b1d7d81" pod="openshift-authentication/oauth-openshift-558db77b4-srwxs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-srwxs\": dial tcp 38.102.83.220:6443: connect: connection refused" Feb 24 10:00:23 crc kubenswrapper[4755]: I0224 10:00:23.715409 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 24 10:00:23 crc kubenswrapper[4755]: I0224 10:00:23.716487 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 24 10:00:23 crc kubenswrapper[4755]: I0224 10:00:23.716517 4755 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="56c6dc1f59bd27418f47c0102400b6607ae9f9baf7011ce5d8c06c5b87387d90" exitCode=1 Feb 24 10:00:23 crc kubenswrapper[4755]: I0224 10:00:23.716590 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"56c6dc1f59bd27418f47c0102400b6607ae9f9baf7011ce5d8c06c5b87387d90"} Feb 24 10:00:23 crc kubenswrapper[4755]: I0224 10:00:23.719037 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5e36a84773ab8a5e0bc802c8c259b0523b81e77909d5096046912bb6e148cf8c"} Feb 24 10:00:23 crc kubenswrapper[4755]: I0224 10:00:23.719083 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"42c5c00b04ce895d13646fb74f3370a95279067d6058a2371fc13c3499808311"} Feb 24 10:00:23 crc kubenswrapper[4755]: I0224 10:00:23.719094 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c4da4397dce40357477540b5c0863b8e4e2524615e49bb24c4efc0e682e0d30a"} Feb 24 10:00:23 crc kubenswrapper[4755]: I0224 10:00:23.719105 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6de6c1158bd3c4bff52d6b32a320486b1bf56dc114a035cc24d7b126b836e670"} Feb 24 10:00:23 crc kubenswrapper[4755]: I0224 10:00:23.719568 4755 scope.go:117] "RemoveContainer" containerID="56c6dc1f59bd27418f47c0102400b6607ae9f9baf7011ce5d8c06c5b87387d90" Feb 24 10:00:24 crc kubenswrapper[4755]: I0224 10:00:24.729124 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 24 10:00:24 crc kubenswrapper[4755]: I0224 10:00:24.730031 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 24 10:00:24 crc kubenswrapper[4755]: I0224 10:00:24.730252 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7442f602fa182fa577c6a322abb3a2faffe02dd41d56844566045f692e2eb34b"} Feb 24 10:00:24 crc kubenswrapper[4755]: I0224 10:00:24.734359 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7d3baa2f62776b7fa025cd8b120da6f3df5b7276d0298213b73689066853529b"} Feb 24 10:00:24 crc kubenswrapper[4755]: I0224 10:00:24.734577 4755 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1574b657-3607-40b8-9c2e-1a056ab20b00" Feb 24 10:00:24 crc kubenswrapper[4755]: I0224 10:00:24.734609 4755 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1574b657-3607-40b8-9c2e-1a056ab20b00" Feb 24 10:00:24 crc kubenswrapper[4755]: I0224 10:00:24.734578 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 10:00:25 crc kubenswrapper[4755]: I0224 10:00:25.385829 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 10:00:26 crc kubenswrapper[4755]: I0224 10:00:26.451126 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 10:00:26 crc kubenswrapper[4755]: I0224 10:00:26.451532 4755 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 24 10:00:26 crc kubenswrapper[4755]: I0224 10:00:26.451601 4755 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 24 10:00:27 crc kubenswrapper[4755]: I0224 10:00:27.340342 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 10:00:27 crc kubenswrapper[4755]: I0224 10:00:27.340412 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 10:00:27 crc kubenswrapper[4755]: I0224 10:00:27.347156 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 10:00:29 crc kubenswrapper[4755]: I0224 10:00:29.316087 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:29 crc kubenswrapper[4755]: I0224 10:00:29.316899 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:29 crc kubenswrapper[4755]: I0224 10:00:29.742461 4755 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 10:00:29 crc kubenswrapper[4755]: I0224 10:00:29.770685 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" event={"ID":"0255ac74-783f-4d88-b6b5-bd8be489d6d7","Type":"ContainerStarted","Data":"685f2751e7ab6f3435b5b08c099bf50a9b4a05c2a6519671f44c7e57390a92e9"} Feb 24 10:00:29 crc kubenswrapper[4755]: I0224 10:00:29.770745 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" event={"ID":"0255ac74-783f-4d88-b6b5-bd8be489d6d7","Type":"ContainerStarted","Data":"ff1579681a2a1d329af7f00dfecb4646d651ead278a077ecfd3ab135897ef7e7"} Feb 24 10:00:29 crc kubenswrapper[4755]: I0224 10:00:29.771683 4755 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1574b657-3607-40b8-9c2e-1a056ab20b00" Feb 24 10:00:29 crc kubenswrapper[4755]: I0224 10:00:29.771709 4755 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1574b657-3607-40b8-9c2e-1a056ab20b00" Feb 24 10:00:29 crc kubenswrapper[4755]: I0224 10:00:29.774873 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 10:00:29 crc kubenswrapper[4755]: I0224 10:00:29.787747 4755 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0f0144a5-8435-43a3-8b62-b3f9d9df9427" Feb 24 10:00:30 crc kubenswrapper[4755]: I0224 10:00:30.777623 4755 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1574b657-3607-40b8-9c2e-1a056ab20b00" Feb 24 10:00:30 crc kubenswrapper[4755]: I0224 10:00:30.778115 4755 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1574b657-3607-40b8-9c2e-1a056ab20b00" Feb 24 10:00:30 crc kubenswrapper[4755]: I0224 10:00:30.857294 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:30 crc kubenswrapper[4755]: I0224 10:00:30.866627 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" Feb 24 10:00:31 crc kubenswrapper[4755]: I0224 10:00:31.423679 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 10:00:31 crc kubenswrapper[4755]: I0224 10:00:31.423758 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 10:00:31 crc kubenswrapper[4755]: I0224 10:00:31.423871 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 10:00:31 crc kubenswrapper[4755]: I0224 10:00:31.426035 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 24 10:00:31 crc kubenswrapper[4755]: I0224 10:00:31.427460 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 24 10:00:31 crc kubenswrapper[4755]: I0224 10:00:31.428550 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 24 10:00:31 crc kubenswrapper[4755]: I0224 10:00:31.436223 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 10:00:31 crc kubenswrapper[4755]: I0224 10:00:31.436604 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 24 10:00:31 crc kubenswrapper[4755]: I0224 10:00:31.442973 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 10:00:31 crc kubenswrapper[4755]: I0224 10:00:31.450184 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 10:00:31 crc kubenswrapper[4755]: I0224 10:00:31.525189 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs\") pod \"network-metrics-daemon-98t22\" (UID: \"82775556-3991-45ab-ac50-7ef81cafeaee\") " pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 10:00:31 crc kubenswrapper[4755]: I0224 10:00:31.525296 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 10:00:31 crc kubenswrapper[4755]: I0224 10:00:31.528554 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 24 10:00:31 crc kubenswrapper[4755]: I0224 10:00:31.533811 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 10:00:31 crc kubenswrapper[4755]: I0224 10:00:31.542491 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82775556-3991-45ab-ac50-7ef81cafeaee-metrics-certs\") pod \"network-metrics-daemon-98t22\" (UID: \"82775556-3991-45ab-ac50-7ef81cafeaee\") " pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 10:00:31 crc kubenswrapper[4755]: I0224 10:00:31.637389 4755 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 10:00:31 crc kubenswrapper[4755]: I0224 10:00:31.648101 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 10:00:31 crc kubenswrapper[4755]: I0224 10:00:31.664126 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 24 10:00:31 crc kubenswrapper[4755]: I0224 10:00:31.666160 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 10:00:31 crc kubenswrapper[4755]: I0224 10:00:31.672040 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98t22" Feb 24 10:00:31 crc kubenswrapper[4755]: I0224 10:00:31.842101 4755 patch_prober.go:28] interesting pod/route-controller-manager-577b4cdcd5-cr4b5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 10:00:31 crc kubenswrapper[4755]: I0224 10:00:31.842402 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" podUID="4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 10:00:32 crc kubenswrapper[4755]: W0224 10:00:32.166395 4755 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-bd7896085161ac463ebcd1521f2e0ffa221c25363057af72cfd0a964fa137faa WatchSource:0}: Error finding container bd7896085161ac463ebcd1521f2e0ffa221c25363057af72cfd0a964fa137faa: Status 404 returned error can't find the container with id bd7896085161ac463ebcd1521f2e0ffa221c25363057af72cfd0a964fa137faa Feb 24 10:00:32 crc kubenswrapper[4755]: W0224 10:00:32.224270 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82775556_3991_45ab_ac50_7ef81cafeaee.slice/crio-c7f40c20648d31441ef771634b3f2c18780e1bb8b58b178fcd64e0b75b6cc052 WatchSource:0}: Error finding container c7f40c20648d31441ef771634b3f2c18780e1bb8b58b178fcd64e0b75b6cc052: Status 404 returned error can't find the container with id c7f40c20648d31441ef771634b3f2c18780e1bb8b58b178fcd64e0b75b6cc052 Feb 24 10:00:32 crc kubenswrapper[4755]: W0224 10:00:32.236362 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-26b519c462eb94ba30519ffcab11b3b6944db0c28d731afa41c2734c3c15678f WatchSource:0}: Error finding container 26b519c462eb94ba30519ffcab11b3b6944db0c28d731afa41c2734c3c15678f: Status 404 returned error can't find the container with id 26b519c462eb94ba30519ffcab11b3b6944db0c28d731afa41c2734c3c15678f Feb 24 10:00:32 crc kubenswrapper[4755]: I0224 10:00:32.803489 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"bffc6205d16dc5bbcf0390737ffedcf96cfa4a723c8d4a51e767de83b2ac4a82"} Feb 24 10:00:32 crc kubenswrapper[4755]: I0224 10:00:32.803847 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"26b519c462eb94ba30519ffcab11b3b6944db0c28d731afa41c2734c3c15678f"} Feb 24 10:00:32 crc kubenswrapper[4755]: I0224 10:00:32.805806 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-98t22" event={"ID":"82775556-3991-45ab-ac50-7ef81cafeaee","Type":"ContainerStarted","Data":"8cdc13f04017634346c62ab1ae27d7c9e33b95a6617af7d1aa5111c24348141a"} Feb 24 10:00:32 crc kubenswrapper[4755]: I0224 10:00:32.805851 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-98t22" event={"ID":"82775556-3991-45ab-ac50-7ef81cafeaee","Type":"ContainerStarted","Data":"1a5073888cd962228b67b4ccc62da60eb30d6f64a0fddd02ad9499c9807b0aaa"} Feb 24 10:00:32 crc kubenswrapper[4755]: I0224 10:00:32.805865 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-98t22" event={"ID":"82775556-3991-45ab-ac50-7ef81cafeaee","Type":"ContainerStarted","Data":"c7f40c20648d31441ef771634b3f2c18780e1bb8b58b178fcd64e0b75b6cc052"} Feb 24 10:00:32 crc kubenswrapper[4755]: I0224 10:00:32.808045 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d9b3eaaa944a34cb5d9627b10a99011400dfd94e8cd4d392adf5b30e1bcc659f"} Feb 24 10:00:32 crc kubenswrapper[4755]: I0224 10:00:32.808090 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6bb55c1c3a653a358b77ac4b55c0a9d5a15db3791cd1bd4f2f8fb3d9a0c64425"} Feb 24 10:00:32 crc kubenswrapper[4755]: I0224 10:00:32.808470 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 10:00:32 crc kubenswrapper[4755]: I0224 10:00:32.810297 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ef335c7994a2404d0b60cc6e662c8e0eb1d30192df510b8d6be37d4ccd22d3b4"} Feb 24 10:00:32 crc kubenswrapper[4755]: I0224 10:00:32.810327 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bd7896085161ac463ebcd1521f2e0ffa221c25363057af72cfd0a964fa137faa"} Feb 24 10:00:33 crc kubenswrapper[4755]: I0224 10:00:33.819522 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Feb 24 10:00:33 crc kubenswrapper[4755]: I0224 10:00:33.819835 4755 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="ef335c7994a2404d0b60cc6e662c8e0eb1d30192df510b8d6be37d4ccd22d3b4" exitCode=255 Feb 24 10:00:33 crc kubenswrapper[4755]: I0224 10:00:33.819883 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"ef335c7994a2404d0b60cc6e662c8e0eb1d30192df510b8d6be37d4ccd22d3b4"} Feb 24 10:00:33 crc kubenswrapper[4755]: I0224 10:00:33.820441 4755 scope.go:117] "RemoveContainer" containerID="ef335c7994a2404d0b60cc6e662c8e0eb1d30192df510b8d6be37d4ccd22d3b4" Feb 24 10:00:34 crc kubenswrapper[4755]: I0224 10:00:34.828963 4755 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Feb 24 10:00:34 crc kubenswrapper[4755]: I0224 10:00:34.829324 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4ca628c241a9cabf8b8da1fae156cc703252c3e22e07b4073f1703cf4cd9ea81"} Feb 24 10:00:35 crc kubenswrapper[4755]: I0224 10:00:35.793467 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 24 10:00:35 crc kubenswrapper[4755]: I0224 10:00:35.842211 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Feb 24 10:00:35 crc kubenswrapper[4755]: I0224 10:00:35.843113 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Feb 24 10:00:35 crc kubenswrapper[4755]: I0224 10:00:35.843207 4755 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="4ca628c241a9cabf8b8da1fae156cc703252c3e22e07b4073f1703cf4cd9ea81" exitCode=255 Feb 24 10:00:35 crc kubenswrapper[4755]: I0224 10:00:35.843310 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"4ca628c241a9cabf8b8da1fae156cc703252c3e22e07b4073f1703cf4cd9ea81"} Feb 24 10:00:35 crc kubenswrapper[4755]: I0224 10:00:35.843352 4755 scope.go:117] "RemoveContainer" containerID="ef335c7994a2404d0b60cc6e662c8e0eb1d30192df510b8d6be37d4ccd22d3b4" Feb 24 10:00:35 crc 
kubenswrapper[4755]: I0224 10:00:35.843797 4755 scope.go:117] "RemoveContainer" containerID="4ca628c241a9cabf8b8da1fae156cc703252c3e22e07b4073f1703cf4cd9ea81" Feb 24 10:00:35 crc kubenswrapper[4755]: E0224 10:00:35.844077 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 10:00:36 crc kubenswrapper[4755]: I0224 10:00:36.195436 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 24 10:00:36 crc kubenswrapper[4755]: I0224 10:00:36.212377 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 24 10:00:36 crc kubenswrapper[4755]: I0224 10:00:36.368852 4755 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0f0144a5-8435-43a3-8b62-b3f9d9df9427" Feb 24 10:00:36 crc kubenswrapper[4755]: I0224 10:00:36.425875 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 24 10:00:36 crc kubenswrapper[4755]: I0224 10:00:36.451900 4755 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 24 10:00:36 crc kubenswrapper[4755]: I0224 10:00:36.451984 4755 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 24 10:00:36 crc kubenswrapper[4755]: I0224 10:00:36.853289 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Feb 24 10:00:37 crc kubenswrapper[4755]: I0224 10:00:37.565840 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 24 10:00:37 crc kubenswrapper[4755]: I0224 10:00:37.614157 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 24 10:00:37 crc kubenswrapper[4755]: I0224 10:00:37.635385 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 24 10:00:37 crc kubenswrapper[4755]: I0224 10:00:37.707577 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 24 10:00:37 crc kubenswrapper[4755]: I0224 10:00:37.865868 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 24 10:00:37 crc kubenswrapper[4755]: I0224 10:00:37.984305 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 24 10:00:38 crc kubenswrapper[4755]: I0224 10:00:38.031895 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 24 10:00:38 crc kubenswrapper[4755]: I0224 10:00:38.439443 4755 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 24 10:00:38 crc kubenswrapper[4755]: I0224 10:00:38.661425 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 24 10:00:38 crc kubenswrapper[4755]: I0224 10:00:38.854193 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 24 10:00:38 crc kubenswrapper[4755]: I0224 10:00:38.903500 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 24 10:00:38 crc kubenswrapper[4755]: I0224 10:00:38.963333 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 24 10:00:39 crc kubenswrapper[4755]: I0224 10:00:39.057671 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 24 10:00:39 crc kubenswrapper[4755]: I0224 10:00:39.127314 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 24 10:00:39 crc kubenswrapper[4755]: I0224 10:00:39.397034 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 24 10:00:39 crc kubenswrapper[4755]: I0224 10:00:39.519151 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 24 10:00:39 crc kubenswrapper[4755]: I0224 10:00:39.757941 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 24 10:00:39 crc kubenswrapper[4755]: I0224 10:00:39.783126 4755 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 24 10:00:40 crc kubenswrapper[4755]: I0224 10:00:40.084205 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 24 10:00:40 crc kubenswrapper[4755]: I0224 10:00:40.412390 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 24 10:00:40 crc kubenswrapper[4755]: I0224 10:00:40.482357 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 24 10:00:40 crc kubenswrapper[4755]: I0224 10:00:40.934577 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 24 10:00:41 crc kubenswrapper[4755]: I0224 10:00:41.028733 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 24 10:00:41 crc kubenswrapper[4755]: I0224 10:00:41.166711 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 24 10:00:41 crc kubenswrapper[4755]: I0224 10:00:41.210319 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 24 10:00:41 crc kubenswrapper[4755]: I0224 10:00:41.239541 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 24 10:00:41 crc kubenswrapper[4755]: I0224 10:00:41.249449 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 24 10:00:41 crc kubenswrapper[4755]: I0224 10:00:41.454532 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 24 10:00:41 crc kubenswrapper[4755]: I0224 10:00:41.561950 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 24 10:00:41 crc kubenswrapper[4755]: I0224 10:00:41.620459 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 24 10:00:41 crc kubenswrapper[4755]: I0224 10:00:41.652923 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 24 10:00:41 crc kubenswrapper[4755]: I0224 10:00:41.834735 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 24 10:00:41 crc kubenswrapper[4755]: I0224 10:00:41.840682 4755 patch_prober.go:28] interesting pod/route-controller-manager-577b4cdcd5-cr4b5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": read tcp 10.217.0.2:48638->10.217.0.66:8443: read: connection reset by peer" start-of-body=
Feb 24 10:00:41 crc kubenswrapper[4755]: I0224 10:00:41.840775 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" podUID="4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": read tcp 10.217.0.2:48638->10.217.0.66:8443: read: connection reset by peer"
Feb 24 10:00:41 crc kubenswrapper[4755]: I0224 10:00:41.890228 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-577b4cdcd5-cr4b5_4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9/route-controller-manager/0.log"
Feb 24 10:00:41 crc kubenswrapper[4755]: I0224 10:00:41.890337 4755 generic.go:334] "Generic (PLEG): container finished" podID="4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9" containerID="fc0de43d6299677f2d5a9f6bb62f969b8174d8eb793457f0441fc3a4b9bb2655" exitCode=255
Feb 24 10:00:41 crc kubenswrapper[4755]: I0224 10:00:41.890426 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" event={"ID":"4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9","Type":"ContainerDied","Data":"fc0de43d6299677f2d5a9f6bb62f969b8174d8eb793457f0441fc3a4b9bb2655"}
Feb 24 10:00:41 crc kubenswrapper[4755]: I0224 10:00:41.891461 4755 scope.go:117] "RemoveContainer" containerID="fc0de43d6299677f2d5a9f6bb62f969b8174d8eb793457f0441fc3a4b9bb2655"
Feb 24 10:00:41 crc kubenswrapper[4755]: I0224 10:00:41.899113 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 24 10:00:41 crc kubenswrapper[4755]: I0224 10:00:41.983667 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 24 10:00:42 crc kubenswrapper[4755]: I0224 10:00:42.580096 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 24 10:00:42 crc kubenswrapper[4755]: I0224 10:00:42.748309 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 24 10:00:42 crc kubenswrapper[4755]: I0224 10:00:42.806243 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 24 10:00:42 crc kubenswrapper[4755]: I0224 10:00:42.899489 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-577b4cdcd5-cr4b5_4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9/route-controller-manager/0.log"
Feb 24 10:00:42 crc kubenswrapper[4755]: I0224 10:00:42.899575 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" event={"ID":"4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9","Type":"ContainerStarted","Data":"4e50f84d6e8656c1199f38429e65a43b83d1de79a12cf62ba27c97634d5bad55"}
Feb 24 10:00:42 crc kubenswrapper[4755]: I0224 10:00:42.900377 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5"
Feb 24 10:00:42 crc kubenswrapper[4755]: I0224 10:00:42.941573 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 24 10:00:43 crc kubenswrapper[4755]: I0224 10:00:43.123993 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 24 10:00:43 crc kubenswrapper[4755]: I0224 10:00:43.237591 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 24 10:00:43 crc kubenswrapper[4755]: I0224 10:00:43.263915 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 24 10:00:43 crc kubenswrapper[4755]: I0224 10:00:43.272883 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 24 10:00:43 crc kubenswrapper[4755]: I0224 10:00:43.694129 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 24 10:00:43 crc kubenswrapper[4755]: I0224 10:00:43.722206 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 24 10:00:43 crc kubenswrapper[4755]: I0224 10:00:43.885164 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 24 10:00:43 crc kubenswrapper[4755]: I0224 10:00:43.900305 4755 patch_prober.go:28] interesting pod/route-controller-manager-577b4cdcd5-cr4b5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 24 10:00:43 crc kubenswrapper[4755]: I0224 10:00:43.900366 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" podUID="4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 24 10:00:43 crc kubenswrapper[4755]: I0224 10:00:43.918853 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 24 10:00:43 crc kubenswrapper[4755]: I0224 10:00:43.962224 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 24 10:00:43 crc kubenswrapper[4755]: I0224 10:00:43.988520 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 24 10:00:44 crc kubenswrapper[4755]: I0224 10:00:44.148700 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 24 10:00:44 crc kubenswrapper[4755]: I0224 10:00:44.336716 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 24 10:00:44 crc kubenswrapper[4755]: I0224 10:00:44.560483 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 24 10:00:44 crc kubenswrapper[4755]: I0224 10:00:44.651368 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 24 10:00:44 crc kubenswrapper[4755]: I0224 10:00:44.716163 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 24 10:00:44 crc kubenswrapper[4755]: I0224 10:00:44.758193 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 24 10:00:44 crc kubenswrapper[4755]: I0224 10:00:44.906683 4755 patch_prober.go:28] interesting pod/route-controller-manager-577b4cdcd5-cr4b5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 24 10:00:44 crc kubenswrapper[4755]: I0224 10:00:44.906783 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" podUID="4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 24 10:00:44 crc kubenswrapper[4755]: I0224 10:00:44.917899 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 24 10:00:44 crc kubenswrapper[4755]: I0224 10:00:44.948801 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 24 10:00:45 crc kubenswrapper[4755]: I0224 10:00:45.108271 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 24 10:00:45 crc kubenswrapper[4755]: I0224 10:00:45.122741 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 24 10:00:45 crc kubenswrapper[4755]: I0224 10:00:45.382482 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 24 10:00:45 crc kubenswrapper[4755]: I0224 10:00:45.584325 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 24 10:00:45 crc kubenswrapper[4755]: I0224 10:00:45.603201 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 24 10:00:45 crc kubenswrapper[4755]: I0224 10:00:45.651135 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 24 10:00:45 crc kubenswrapper[4755]: I0224 10:00:45.734896 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 24 10:00:45 crc kubenswrapper[4755]: I0224 10:00:45.781765 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 24 10:00:45 crc kubenswrapper[4755]: I0224 10:00:45.915366 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 24 10:00:45 crc kubenswrapper[4755]: I0224 10:00:45.923147 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 24 10:00:46 crc kubenswrapper[4755]: I0224 10:00:46.107628 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 24 10:00:46 crc kubenswrapper[4755]: I0224 10:00:46.217853 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 24 10:00:46 crc kubenswrapper[4755]: I0224 10:00:46.280467 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 24 10:00:46 crc kubenswrapper[4755]: I0224 10:00:46.302724 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 24 10:00:46 crc kubenswrapper[4755]: I0224 10:00:46.453106 4755 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 24 10:00:46 crc kubenswrapper[4755]: I0224 10:00:46.453202 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 24 10:00:46 crc kubenswrapper[4755]: I0224 10:00:46.453287 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 10:00:46 crc kubenswrapper[4755]: I0224 10:00:46.455929 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"7442f602fa182fa577c6a322abb3a2faffe02dd41d56844566045f692e2eb34b"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Feb 24 10:00:46 crc kubenswrapper[4755]: I0224 10:00:46.456247 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://7442f602fa182fa577c6a322abb3a2faffe02dd41d56844566045f692e2eb34b" gracePeriod=30
Feb 24 10:00:46 crc kubenswrapper[4755]: I0224 10:00:46.480547 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 24 10:00:46 crc kubenswrapper[4755]: I0224 10:00:46.672854 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 24 10:00:46 crc kubenswrapper[4755]: I0224 10:00:46.676831 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 24 10:00:46 crc kubenswrapper[4755]: I0224 10:00:46.684537 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 24 10:00:46 crc kubenswrapper[4755]: I0224 10:00:46.692546 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 24 10:00:46 crc kubenswrapper[4755]: I0224 10:00:46.698478 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 24 10:00:46 crc kubenswrapper[4755]: I0224 10:00:46.724545 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 24 10:00:46 crc kubenswrapper[4755]: I0224 10:00:46.766828 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 24 10:00:46 crc kubenswrapper[4755]: I0224 10:00:46.837383 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 24 10:00:46 crc kubenswrapper[4755]: I0224 10:00:46.898359 4755 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 24 10:00:46 crc kubenswrapper[4755]: I0224 10:00:46.934140 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 24 10:00:46 crc kubenswrapper[4755]: I0224 10:00:46.936748 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 24 10:00:47 crc kubenswrapper[4755]: I0224 10:00:47.163937 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 24 10:00:47 crc kubenswrapper[4755]: I0224 10:00:47.187653 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 24 10:00:47 crc kubenswrapper[4755]: I0224 10:00:47.195632 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 24 10:00:47 crc kubenswrapper[4755]: I0224 10:00:47.293629 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 24 10:00:47 crc kubenswrapper[4755]: I0224 10:00:47.417552 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 24 10:00:47 crc kubenswrapper[4755]: I0224 10:00:47.580427 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 24 10:00:47 crc kubenswrapper[4755]: I0224 10:00:47.596893 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 24 10:00:47 crc kubenswrapper[4755]: I0224 10:00:47.643115 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 24 10:00:47 crc kubenswrapper[4755]: I0224 10:00:47.788707 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 24 10:00:47 crc kubenswrapper[4755]: I0224 10:00:47.814295 4755 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 24 10:00:47 crc kubenswrapper[4755]: I0224 10:00:47.816361 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 24 10:00:47 crc kubenswrapper[4755]: I0224 10:00:47.985983 4755 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 24 10:00:48 crc kubenswrapper[4755]: I0224 10:00:48.032760 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 24 10:00:48 crc kubenswrapper[4755]: I0224 10:00:48.045543 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 24 10:00:48 crc kubenswrapper[4755]: I0224 10:00:48.200490 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 24 10:00:48 crc kubenswrapper[4755]: I0224 10:00:48.316415 4755 scope.go:117] "RemoveContainer" containerID="4ca628c241a9cabf8b8da1fae156cc703252c3e22e07b4073f1703cf4cd9ea81"
Feb 24 10:00:48 crc kubenswrapper[4755]: I0224 10:00:48.336744 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 24 10:00:48 crc kubenswrapper[4755]: I0224 10:00:48.372754 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 24 10:00:48 crc kubenswrapper[4755]: I0224 10:00:48.411798 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 24 10:00:48 crc kubenswrapper[4755]: I0224 10:00:48.421546 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 24 10:00:48 crc kubenswrapper[4755]: I0224 10:00:48.468878 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 24 10:00:48 crc kubenswrapper[4755]: I0224 10:00:48.645082 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 24 10:00:48 crc kubenswrapper[4755]: I0224 10:00:48.684003 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 24 10:00:48 crc kubenswrapper[4755]: I0224 10:00:48.691851 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 24 10:00:48 crc kubenswrapper[4755]: I0224 10:00:48.725384 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 24 10:00:48 crc kubenswrapper[4755]: I0224 10:00:48.732105 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 24 10:00:48 crc kubenswrapper[4755]: I0224 10:00:48.944671 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log"
Feb 24 10:00:48 crc kubenswrapper[4755]: I0224 10:00:48.945015 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0cd584fb3bf4628ece4f261303c7e1cd6dda7a686f2a18d68437f2bfc60958c5"}
Feb 24 10:00:48 crc kubenswrapper[4755]: I0224 10:00:48.992503 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.072690 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.139609 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.207199 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.262184 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.301269 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.442334 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.443475 4755 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.444439 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" podStartSLOduration=41.444424789 podStartE2EDuration="41.444424789s" podCreationTimestamp="2026-02-24 10:00:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 10:00:29.446786583 +0000 UTC m=+333.903309146" watchObservedRunningTime="2026-02-24 10:00:49.444424789 +0000 UTC m=+353.900947342"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.447686 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8c9cf5678-78bjp" podStartSLOduration=41.447678015 podStartE2EDuration="41.447678015s" podCreationTimestamp="2026-02-24 10:00:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 10:00:29.785687714 +0000 UTC m=+334.242210257" watchObservedRunningTime="2026-02-24 10:00:49.447678015 +0000 UTC m=+353.904200568"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.447932 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-98t22" podStartSLOduration=302.447927112 podStartE2EDuration="5m2.447927112s" podCreationTimestamp="2026-02-24 09:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 10:00:32.841982239 +0000 UTC m=+337.298504792" watchObservedRunningTime="2026-02-24 10:00:49.447927112 +0000 UTC m=+353.904449665"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.448689 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-srwxs"]
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.448740 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.448761 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8c9cf5678-78bjp","openshift-multus/network-metrics-daemon-98t22"]
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.458189 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.478463 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.47843216 podStartE2EDuration="20.47843216s" podCreationTimestamp="2026-02-24 10:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 10:00:49.47133517 +0000 UTC m=+353.927857723" watchObservedRunningTime="2026-02-24 10:00:49.47843216 +0000 UTC m=+353.934954743"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.543109 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.597961 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.634422 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.644959 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.662242 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.674046 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.721590 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.724220 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.804269 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.864545 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.938608 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.941493 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.953260 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.954058 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.954161 4755 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="0cd584fb3bf4628ece4f261303c7e1cd6dda7a686f2a18d68437f2bfc60958c5" exitCode=255
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.954645 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"0cd584fb3bf4628ece4f261303c7e1cd6dda7a686f2a18d68437f2bfc60958c5"}
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.954740 4755 scope.go:117] "RemoveContainer" containerID="4ca628c241a9cabf8b8da1fae156cc703252c3e22e07b4073f1703cf4cd9ea81"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.955962 4755 scope.go:117] "RemoveContainer" containerID="0cd584fb3bf4628ece4f261303c7e1cd6dda7a686f2a18d68437f2bfc60958c5"
Feb 24 10:00:49 crc kubenswrapper[4755]: E0224 10:00:49.956591 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 24 10:00:49 crc kubenswrapper[4755]: I0224 10:00:49.976754 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 24 10:00:50 crc kubenswrapper[4755]: I0224 10:00:50.027161 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 24 10:00:50 crc kubenswrapper[4755]: I0224 10:00:50.150720 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 24 10:00:50 crc kubenswrapper[4755]: I0224 10:00:50.200466 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 24 10:00:50 crc kubenswrapper[4755]: I0224 10:00:50.221525 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 24 10:00:50 crc kubenswrapper[4755]: I0224 10:00:50.231189 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 24 10:00:50 crc kubenswrapper[4755]: I0224 10:00:50.285302 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 24 10:00:50 crc kubenswrapper[4755]: I0224 10:00:50.328011 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2317257d-494c-48b7-a69c-013e8b1d7d81" path="/var/lib/kubelet/pods/2317257d-494c-48b7-a69c-013e8b1d7d81/volumes"
Feb 24 10:00:50 crc kubenswrapper[4755]: I0224 10:00:50.430703 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 24 10:00:50 crc kubenswrapper[4755]: I0224 10:00:50.473927 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 24 10:00:50 crc kubenswrapper[4755]: I0224 10:00:50.500409 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 24 10:00:50 crc kubenswrapper[4755]: I0224 10:00:50.549780 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 24 10:00:50 crc kubenswrapper[4755]: I0224 10:00:50.674377 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 24 10:00:50 crc kubenswrapper[4755]: I0224 10:00:50.844933 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 24 10:00:50 crc kubenswrapper[4755]: I0224 10:00:50.965030 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log"
Feb 24 10:00:51 crc kubenswrapper[4755]: I0224 10:00:51.010035 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 24 10:00:51 crc kubenswrapper[4755]: I0224 10:00:51.048994 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 24 10:00:51 crc kubenswrapper[4755]: I0224 10:00:51.234013 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 24 10:00:51 crc kubenswrapper[4755]: I0224 10:00:51.243615 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 24 10:00:51 crc kubenswrapper[4755]: I0224 10:00:51.435000 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 24 10:00:51 crc kubenswrapper[4755]: I0224 10:00:51.506057 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 24 10:00:51 crc kubenswrapper[4755]: I0224 10:00:51.528587 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 24 10:00:51 crc kubenswrapper[4755]: I0224 10:00:51.538177 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 24 10:00:51 crc kubenswrapper[4755]: I0224 10:00:51.573300 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 24 10:00:51 crc kubenswrapper[4755]: I0224 10:00:51.673428 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 24 10:00:51 crc kubenswrapper[4755]: I0224 10:00:51.741127 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 24 10:00:51 crc kubenswrapper[4755]: I0224 10:00:51.781249 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 24 10:00:51 crc kubenswrapper[4755]: I0224 10:00:51.805578 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 24 10:00:51 crc kubenswrapper[4755]: I0224 10:00:51.841441 4755 patch_prober.go:28] interesting pod/route-controller-manager-577b4cdcd5-cr4b5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 24 10:00:51 crc kubenswrapper[4755]: I0224 10:00:51.841505 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" podUID="4dc4365a-e8e3-4a98-b6e7-db9da8d7c4e9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 24 10:00:51 crc kubenswrapper[4755]: I0224 10:00:51.939692 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 24 10:00:51 crc kubenswrapper[4755]: I0224 10:00:51.979686 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.020736 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb"]
Feb 24 10:00:52 crc kubenswrapper[4755]: E0224 10:00:52.020943 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2317257d-494c-48b7-a69c-013e8b1d7d81" containerName="oauth-openshift"
Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.020957 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="2317257d-494c-48b7-a69c-013e8b1d7d81" containerName="oauth-openshift"
Feb 24 10:00:52 crc kubenswrapper[4755]: E0224 10:00:52.020973 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48e88ab9-cdeb-458e-b1ff-5fc96d923829" containerName="installer"
Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.020981 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="48e88ab9-cdeb-458e-b1ff-5fc96d923829" containerName="installer"
Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.021136 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="2317257d-494c-48b7-a69c-013e8b1d7d81" containerName="oauth-openshift"
Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.021168 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="48e88ab9-cdeb-458e-b1ff-5fc96d923829" containerName="installer"
Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.021679 4755 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.024018 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.024383 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.025355 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.025600 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.028035 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.028452 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.029387 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.030062 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.030219 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.030313 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 24 10:00:52 crc 
kubenswrapper[4755]: I0224 10:00:52.030625 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.030719 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.040467 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb"] Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.043603 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.048507 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.063868 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.074476 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.136958 4755 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.137307 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://58c026702587702fcdb5a43e2e28daa6ac14ac6ea385c7d176691baf9104bcaf" gracePeriod=5 Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.176376 4755 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.196693 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-system-session\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.196761 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.196814 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.196846 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-system-router-certs\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.196882 
4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.196920 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.196987 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-audit-policies\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.197021 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.197052 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-audit-dir\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.197127 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.197173 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-system-service-ca\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.197201 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-user-template-error\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.197250 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-user-template-login\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: 
\"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.197285 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snjg9\" (UniqueName: \"kubernetes.io/projected/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-kube-api-access-snjg9\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.297939 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.297983 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-audit-policies\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.298003 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.298024 4755 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-audit-dir\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.298044 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.298088 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-user-template-error\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.298108 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-system-service-ca\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.298146 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-user-template-login\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: 
\"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.298177 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snjg9\" (UniqueName: \"kubernetes.io/projected/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-kube-api-access-snjg9\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.298199 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-system-session\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.298247 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.298285 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.298311 4755 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-system-router-certs\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.298306 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-audit-dir\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.298339 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.299248 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-audit-policies\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.299348 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-system-service-ca\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " 
pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.299810 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.300512 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.306172 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.306417 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-system-router-certs\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.306454 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.306761 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-system-session\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.310365 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-user-template-error\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.315332 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.315671 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-user-template-login\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " 
pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.318954 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.322952 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snjg9\" (UniqueName: \"kubernetes.io/projected/4bb2f62a-6ce1-463c-86a4-1757e5f72e2b-kube-api-access-snjg9\") pod \"oauth-openshift-7cd8f88d7f-wf9zb\" (UID: \"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.343263 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.422997 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.444613 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.458639 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.474962 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.502084 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.502209 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.543721 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.704236 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.772603 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.794177 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb"] Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.855335 4755 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.855527 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.861706 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.861709 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.881315 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.902651 4755 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.916989 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.942665 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 24 10:00:52 crc kubenswrapper[4755]: I0224 10:00:52.978180 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" event={"ID":"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b","Type":"ContainerStarted","Data":"5b5a0a4685cdda2bb9eea4a45165b263f187da2fc95e509756d6df264823653e"} Feb 24 10:00:53 crc kubenswrapper[4755]: I0224 10:00:53.110958 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 24 10:00:53 crc kubenswrapper[4755]: I0224 
10:00:53.147825 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 24 10:00:53 crc kubenswrapper[4755]: I0224 10:00:53.159769 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 24 10:00:53 crc kubenswrapper[4755]: I0224 10:00:53.246780 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 24 10:00:53 crc kubenswrapper[4755]: I0224 10:00:53.256936 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 24 10:00:53 crc kubenswrapper[4755]: I0224 10:00:53.380058 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 24 10:00:53 crc kubenswrapper[4755]: I0224 10:00:53.382407 4755 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 24 10:00:53 crc kubenswrapper[4755]: I0224 10:00:53.420866 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 24 10:00:53 crc kubenswrapper[4755]: I0224 10:00:53.471684 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 24 10:00:53 crc kubenswrapper[4755]: I0224 10:00:53.541647 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 24 10:00:53 crc kubenswrapper[4755]: I0224 10:00:53.550542 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 24 10:00:53 crc kubenswrapper[4755]: I0224 10:00:53.608727 4755 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 24 10:00:53 crc kubenswrapper[4755]: I0224 10:00:53.739987 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 24 10:00:53 crc kubenswrapper[4755]: I0224 10:00:53.996195 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" event={"ID":"4bb2f62a-6ce1-463c-86a4-1757e5f72e2b","Type":"ContainerStarted","Data":"17cb803985212eee24c834c679d14c45c48f78689458f79199b644f78a48df87"} Feb 24 10:00:53 crc kubenswrapper[4755]: I0224 10:00:53.997589 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:54 crc kubenswrapper[4755]: I0224 10:00:54.007422 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" Feb 24 10:00:54 crc kubenswrapper[4755]: I0224 10:00:54.028844 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7cd8f88d7f-wf9zb" podStartSLOduration=63.028821915 podStartE2EDuration="1m3.028821915s" podCreationTimestamp="2026-02-24 09:59:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 10:00:54.025547977 +0000 UTC m=+358.482070530" watchObservedRunningTime="2026-02-24 10:00:54.028821915 +0000 UTC m=+358.485344478" Feb 24 10:00:54 crc kubenswrapper[4755]: I0224 10:00:54.093437 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 24 10:00:54 crc kubenswrapper[4755]: I0224 10:00:54.104633 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 24 10:00:54 crc kubenswrapper[4755]: I0224 
10:00:54.259282 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 24 10:00:54 crc kubenswrapper[4755]: I0224 10:00:54.439929 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 24 10:00:54 crc kubenswrapper[4755]: I0224 10:00:54.592534 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 24 10:00:54 crc kubenswrapper[4755]: I0224 10:00:54.659421 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 24 10:00:54 crc kubenswrapper[4755]: I0224 10:00:54.691134 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 24 10:00:54 crc kubenswrapper[4755]: I0224 10:00:54.738475 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 24 10:00:54 crc kubenswrapper[4755]: I0224 10:00:54.879106 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 24 10:00:54 crc kubenswrapper[4755]: I0224 10:00:54.881631 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 24 10:00:55 crc kubenswrapper[4755]: I0224 10:00:55.031047 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 24 10:00:55 crc kubenswrapper[4755]: I0224 10:00:55.041231 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 24 10:00:55 crc kubenswrapper[4755]: I0224 10:00:55.263456 4755 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 24 10:00:55 crc kubenswrapper[4755]: I0224 10:00:55.306199 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 24 10:00:55 crc kubenswrapper[4755]: I0224 10:00:55.545440 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 24 10:00:55 crc kubenswrapper[4755]: I0224 10:00:55.621057 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 24 10:00:55 crc kubenswrapper[4755]: I0224 10:00:55.629150 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 24 10:00:55 crc kubenswrapper[4755]: I0224 10:00:55.640490 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 24 10:00:55 crc kubenswrapper[4755]: I0224 10:00:55.665182 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 24 10:00:55 crc kubenswrapper[4755]: I0224 10:00:55.720178 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 24 10:00:55 crc kubenswrapper[4755]: I0224 10:00:55.763583 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 10:00:55 crc kubenswrapper[4755]: I0224 10:00:55.894173 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 24 10:00:55 crc kubenswrapper[4755]: I0224 10:00:55.933545 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 10:00:56 crc kubenswrapper[4755]: I0224 10:00:56.081794 4755 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 24 10:00:56 crc kubenswrapper[4755]: I0224 10:00:56.149373 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 24 10:00:56 crc kubenswrapper[4755]: I0224 10:00:56.507056 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 24 10:00:56 crc kubenswrapper[4755]: I0224 10:00:56.527804 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 24 10:00:56 crc kubenswrapper[4755]: I0224 10:00:56.599790 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 24 10:00:56 crc kubenswrapper[4755]: I0224 10:00:56.650543 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 24 10:00:56 crc kubenswrapper[4755]: I0224 10:00:56.655977 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 24 10:00:56 crc kubenswrapper[4755]: I0224 10:00:56.696916 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 10:00:56 crc kubenswrapper[4755]: I0224 10:00:56.813006 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 24 10:00:57 crc kubenswrapper[4755]: I0224 10:00:57.274653 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 24 10:00:57 crc kubenswrapper[4755]: I0224 10:00:57.317769 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 24 10:00:57 crc kubenswrapper[4755]: I0224 
10:00:57.413516 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 24 10:00:57 crc kubenswrapper[4755]: I0224 10:00:57.416652 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 24 10:00:57 crc kubenswrapper[4755]: I0224 10:00:57.466814 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 24 10:00:57 crc kubenswrapper[4755]: I0224 10:00:57.736548 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 24 10:00:57 crc kubenswrapper[4755]: I0224 10:00:57.736653 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 10:00:57 crc kubenswrapper[4755]: I0224 10:00:57.861766 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 24 10:00:57 crc kubenswrapper[4755]: I0224 10:00:57.875378 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 24 10:00:57 crc kubenswrapper[4755]: I0224 10:00:57.875469 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 24 10:00:57 crc kubenswrapper[4755]: I0224 10:00:57.875581 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 24 10:00:57 crc kubenswrapper[4755]: I0224 10:00:57.875618 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 24 10:00:57 crc kubenswrapper[4755]: I0224 10:00:57.875654 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 24 10:00:57 crc kubenswrapper[4755]: I0224 10:00:57.875983 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 10:00:57 crc kubenswrapper[4755]: I0224 10:00:57.876018 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 10:00:57 crc kubenswrapper[4755]: I0224 10:00:57.876102 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 10:00:57 crc kubenswrapper[4755]: I0224 10:00:57.876074 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 10:00:57 crc kubenswrapper[4755]: I0224 10:00:57.894482 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 10:00:57 crc kubenswrapper[4755]: I0224 10:00:57.976949 4755 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:57 crc kubenswrapper[4755]: I0224 10:00:57.977008 4755 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:57 crc kubenswrapper[4755]: I0224 10:00:57.977026 4755 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:57 crc kubenswrapper[4755]: I0224 10:00:57.977043 4755 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:57 crc kubenswrapper[4755]: I0224 10:00:57.977060 4755 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 24 10:00:58 crc kubenswrapper[4755]: I0224 10:00:58.024990 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 24 10:00:58 crc kubenswrapper[4755]: I0224 10:00:58.025122 4755 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="58c026702587702fcdb5a43e2e28daa6ac14ac6ea385c7d176691baf9104bcaf" exitCode=137 Feb 24 10:00:58 crc kubenswrapper[4755]: I0224 10:00:58.025199 4755 scope.go:117] "RemoveContainer" 
containerID="58c026702587702fcdb5a43e2e28daa6ac14ac6ea385c7d176691baf9104bcaf" Feb 24 10:00:58 crc kubenswrapper[4755]: I0224 10:00:58.025319 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 10:00:58 crc kubenswrapper[4755]: I0224 10:00:58.050830 4755 scope.go:117] "RemoveContainer" containerID="58c026702587702fcdb5a43e2e28daa6ac14ac6ea385c7d176691baf9104bcaf" Feb 24 10:00:58 crc kubenswrapper[4755]: E0224 10:00:58.051285 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58c026702587702fcdb5a43e2e28daa6ac14ac6ea385c7d176691baf9104bcaf\": container with ID starting with 58c026702587702fcdb5a43e2e28daa6ac14ac6ea385c7d176691baf9104bcaf not found: ID does not exist" containerID="58c026702587702fcdb5a43e2e28daa6ac14ac6ea385c7d176691baf9104bcaf" Feb 24 10:00:58 crc kubenswrapper[4755]: I0224 10:00:58.051322 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58c026702587702fcdb5a43e2e28daa6ac14ac6ea385c7d176691baf9104bcaf"} err="failed to get container status \"58c026702587702fcdb5a43e2e28daa6ac14ac6ea385c7d176691baf9104bcaf\": rpc error: code = NotFound desc = could not find container \"58c026702587702fcdb5a43e2e28daa6ac14ac6ea385c7d176691baf9104bcaf\": container with ID starting with 58c026702587702fcdb5a43e2e28daa6ac14ac6ea385c7d176691baf9104bcaf not found: ID does not exist" Feb 24 10:00:58 crc kubenswrapper[4755]: I0224 10:00:58.312935 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 24 10:00:58 crc kubenswrapper[4755]: I0224 10:00:58.328498 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 24 10:00:58 crc kubenswrapper[4755]: 
I0224 10:00:58.778063 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 24 10:00:58 crc kubenswrapper[4755]: I0224 10:00:58.921234 4755 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 24 10:00:59 crc kubenswrapper[4755]: I0224 10:00:59.189706 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39508: no serving certificate available for the kubelet" Feb 24 10:01:00 crc kubenswrapper[4755]: I0224 10:01:00.847390 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-577b4cdcd5-cr4b5" Feb 24 10:01:03 crc kubenswrapper[4755]: I0224 10:01:03.316040 4755 scope.go:117] "RemoveContainer" containerID="0cd584fb3bf4628ece4f261303c7e1cd6dda7a686f2a18d68437f2bfc60958c5" Feb 24 10:01:03 crc kubenswrapper[4755]: E0224 10:01:03.316570 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 10:01:08 crc kubenswrapper[4755]: I0224 10:01:08.365958 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59602: no serving certificate available for the kubelet" Feb 24 10:01:11 crc kubenswrapper[4755]: I0224 10:01:11.654751 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 10:01:17 crc kubenswrapper[4755]: I0224 10:01:17.164835 4755 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 24 10:01:17 crc kubenswrapper[4755]: I0224 10:01:17.168420 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 24 10:01:17 crc kubenswrapper[4755]: I0224 10:01:17.169314 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 24 10:01:17 crc kubenswrapper[4755]: I0224 10:01:17.169385 4755 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="7442f602fa182fa577c6a322abb3a2faffe02dd41d56844566045f692e2eb34b" exitCode=137 Feb 24 10:01:17 crc kubenswrapper[4755]: I0224 10:01:17.169427 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"7442f602fa182fa577c6a322abb3a2faffe02dd41d56844566045f692e2eb34b"} Feb 24 10:01:17 crc kubenswrapper[4755]: I0224 10:01:17.169471 4755 scope.go:117] "RemoveContainer" containerID="56c6dc1f59bd27418f47c0102400b6607ae9f9baf7011ce5d8c06c5b87387d90" Feb 24 10:01:18 crc kubenswrapper[4755]: I0224 10:01:18.177844 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 24 10:01:18 crc kubenswrapper[4755]: I0224 10:01:18.180790 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 24 10:01:18 crc kubenswrapper[4755]: I0224 10:01:18.180897 4755 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1ba31b724f54a87dc12ccf50ecbe1c01d85e52c770ebaa40752ad9f9e05d36a5"} Feb 24 10:01:18 crc kubenswrapper[4755]: I0224 10:01:18.316883 4755 scope.go:117] "RemoveContainer" containerID="0cd584fb3bf4628ece4f261303c7e1cd6dda7a686f2a18d68437f2bfc60958c5" Feb 24 10:01:19 crc kubenswrapper[4755]: I0224 10:01:19.189886 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log" Feb 24 10:01:19 crc kubenswrapper[4755]: I0224 10:01:19.190031 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"99f49120bb2974bdf5c8071e1d367b794587410299d567622b8d479a4839645f"} Feb 24 10:01:20 crc kubenswrapper[4755]: I0224 10:01:20.198454 4755 generic.go:334] "Generic (PLEG): container finished" podID="e105e7e0-0046-47ca-8a73-c27e385a0301" containerID="3135b9ca423f992987cdab3ae9179ef151afbc6ec38de245af48f4c5b16f3b94" exitCode=0 Feb 24 10:01:20 crc kubenswrapper[4755]: I0224 10:01:20.198893 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" event={"ID":"e105e7e0-0046-47ca-8a73-c27e385a0301","Type":"ContainerDied","Data":"3135b9ca423f992987cdab3ae9179ef151afbc6ec38de245af48f4c5b16f3b94"} Feb 24 10:01:20 crc kubenswrapper[4755]: I0224 10:01:20.199723 4755 scope.go:117] "RemoveContainer" containerID="3135b9ca423f992987cdab3ae9179ef151afbc6ec38de245af48f4c5b16f3b94" Feb 24 10:01:21 crc kubenswrapper[4755]: I0224 10:01:21.209137 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" 
event={"ID":"e105e7e0-0046-47ca-8a73-c27e385a0301","Type":"ContainerStarted","Data":"4af76b4808f7dc5886889bfb4a0d75d98cf9c31c61d2f3d762f4b58701ef54ad"} Feb 24 10:01:21 crc kubenswrapper[4755]: I0224 10:01:21.210309 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" Feb 24 10:01:21 crc kubenswrapper[4755]: I0224 10:01:21.216933 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" Feb 24 10:01:25 crc kubenswrapper[4755]: I0224 10:01:25.385708 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 10:01:26 crc kubenswrapper[4755]: I0224 10:01:26.451774 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 10:01:26 crc kubenswrapper[4755]: I0224 10:01:26.457845 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 10:01:27 crc kubenswrapper[4755]: I0224 10:01:27.249529 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 10:01:51 crc kubenswrapper[4755]: I0224 10:01:51.695651 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:01:51 crc kubenswrapper[4755]: I0224 10:01:51.696364 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 10:01:56 crc kubenswrapper[4755]: I0224 10:01:56.629323 4755 scope.go:117] "RemoveContainer" containerID="338e87cd75fea385cc01992029d213cec4632d99ac560e89702d8c1bdae137e9" Feb 24 10:01:56 crc kubenswrapper[4755]: I0224 10:01:56.652815 4755 scope.go:117] "RemoveContainer" containerID="2b4d8e6752e71eb6372d4542eedad830cc5a93b7b34ad9e4f67172449260530f" Feb 24 10:01:56 crc kubenswrapper[4755]: I0224 10:01:56.665174 4755 scope.go:117] "RemoveContainer" containerID="632bc5f63548254ae85a2b435a207ea713738b00e95e93a5e5414992af3ee472" Feb 24 10:01:56 crc kubenswrapper[4755]: I0224 10:01:56.681695 4755 scope.go:117] "RemoveContainer" containerID="3c4adad584c4eb69fd5a08bd272c691ef0cdf8fbb05165c0b306cf5f73bbfea0" Feb 24 10:01:56 crc kubenswrapper[4755]: I0224 10:01:56.697504 4755 scope.go:117] "RemoveContainer" containerID="13274b8b6846e31f071ed4c46ab87fc53f1bd824a36c43dd7f3b5a64d450964e" Feb 24 10:02:21 crc kubenswrapper[4755]: I0224 10:02:21.695052 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:02:21 crc kubenswrapper[4755]: I0224 10:02:21.696000 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.221611 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-bj8q9"] Feb 24 10:02:27 crc kubenswrapper[4755]: E0224 
10:02:27.222478 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.222494 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.222599 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.223020 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.286334 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-bj8q9"] Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.376323 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8effc883-ee29-4fbf-84f3-ca4305be75f3-registry-certificates\") pod \"image-registry-66df7c8f76-bj8q9\" (UID: \"8effc883-ee29-4fbf-84f3-ca4305be75f3\") " pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.376385 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8effc883-ee29-4fbf-84f3-ca4305be75f3-trusted-ca\") pod \"image-registry-66df7c8f76-bj8q9\" (UID: \"8effc883-ee29-4fbf-84f3-ca4305be75f3\") " pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.376438 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-bj8q9\" (UID: \"8effc883-ee29-4fbf-84f3-ca4305be75f3\") " pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.376463 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp968\" (UniqueName: \"kubernetes.io/projected/8effc883-ee29-4fbf-84f3-ca4305be75f3-kube-api-access-xp968\") pod \"image-registry-66df7c8f76-bj8q9\" (UID: \"8effc883-ee29-4fbf-84f3-ca4305be75f3\") " pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.376493 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8effc883-ee29-4fbf-84f3-ca4305be75f3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-bj8q9\" (UID: \"8effc883-ee29-4fbf-84f3-ca4305be75f3\") " pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.376520 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8effc883-ee29-4fbf-84f3-ca4305be75f3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-bj8q9\" (UID: \"8effc883-ee29-4fbf-84f3-ca4305be75f3\") " pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.376647 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8effc883-ee29-4fbf-84f3-ca4305be75f3-registry-tls\") pod \"image-registry-66df7c8f76-bj8q9\" (UID: \"8effc883-ee29-4fbf-84f3-ca4305be75f3\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.376855 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8effc883-ee29-4fbf-84f3-ca4305be75f3-bound-sa-token\") pod \"image-registry-66df7c8f76-bj8q9\" (UID: \"8effc883-ee29-4fbf-84f3-ca4305be75f3\") " pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.415844 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-bj8q9\" (UID: \"8effc883-ee29-4fbf-84f3-ca4305be75f3\") " pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.477868 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8effc883-ee29-4fbf-84f3-ca4305be75f3-bound-sa-token\") pod \"image-registry-66df7c8f76-bj8q9\" (UID: \"8effc883-ee29-4fbf-84f3-ca4305be75f3\") " pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.477929 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8effc883-ee29-4fbf-84f3-ca4305be75f3-registry-certificates\") pod \"image-registry-66df7c8f76-bj8q9\" (UID: \"8effc883-ee29-4fbf-84f3-ca4305be75f3\") " pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.477962 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8effc883-ee29-4fbf-84f3-ca4305be75f3-trusted-ca\") pod \"image-registry-66df7c8f76-bj8q9\" (UID: \"8effc883-ee29-4fbf-84f3-ca4305be75f3\") " pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.478006 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp968\" (UniqueName: \"kubernetes.io/projected/8effc883-ee29-4fbf-84f3-ca4305be75f3-kube-api-access-xp968\") pod \"image-registry-66df7c8f76-bj8q9\" (UID: \"8effc883-ee29-4fbf-84f3-ca4305be75f3\") " pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.478039 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8effc883-ee29-4fbf-84f3-ca4305be75f3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-bj8q9\" (UID: \"8effc883-ee29-4fbf-84f3-ca4305be75f3\") " pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.478079 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8effc883-ee29-4fbf-84f3-ca4305be75f3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-bj8q9\" (UID: \"8effc883-ee29-4fbf-84f3-ca4305be75f3\") " pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.478106 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8effc883-ee29-4fbf-84f3-ca4305be75f3-registry-tls\") pod \"image-registry-66df7c8f76-bj8q9\" (UID: \"8effc883-ee29-4fbf-84f3-ca4305be75f3\") " pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.479816 4755 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8effc883-ee29-4fbf-84f3-ca4305be75f3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-bj8q9\" (UID: \"8effc883-ee29-4fbf-84f3-ca4305be75f3\") " pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.481488 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8effc883-ee29-4fbf-84f3-ca4305be75f3-trusted-ca\") pod \"image-registry-66df7c8f76-bj8q9\" (UID: \"8effc883-ee29-4fbf-84f3-ca4305be75f3\") " pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.483376 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8effc883-ee29-4fbf-84f3-ca4305be75f3-registry-certificates\") pod \"image-registry-66df7c8f76-bj8q9\" (UID: \"8effc883-ee29-4fbf-84f3-ca4305be75f3\") " pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.485208 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8effc883-ee29-4fbf-84f3-ca4305be75f3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-bj8q9\" (UID: \"8effc883-ee29-4fbf-84f3-ca4305be75f3\") " pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.485429 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8effc883-ee29-4fbf-84f3-ca4305be75f3-registry-tls\") pod \"image-registry-66df7c8f76-bj8q9\" (UID: \"8effc883-ee29-4fbf-84f3-ca4305be75f3\") " pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc 
kubenswrapper[4755]: I0224 10:02:27.497579 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp968\" (UniqueName: \"kubernetes.io/projected/8effc883-ee29-4fbf-84f3-ca4305be75f3-kube-api-access-xp968\") pod \"image-registry-66df7c8f76-bj8q9\" (UID: \"8effc883-ee29-4fbf-84f3-ca4305be75f3\") " pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.507561 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8effc883-ee29-4fbf-84f3-ca4305be75f3-bound-sa-token\") pod \"image-registry-66df7c8f76-bj8q9\" (UID: \"8effc883-ee29-4fbf-84f3-ca4305be75f3\") " pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.541611 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:27 crc kubenswrapper[4755]: I0224 10:02:27.812799 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-bj8q9"] Feb 24 10:02:28 crc kubenswrapper[4755]: I0224 10:02:28.627575 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" event={"ID":"8effc883-ee29-4fbf-84f3-ca4305be75f3","Type":"ContainerStarted","Data":"ddd602fd389cd223b4ef7165ad0d7df8cb85ddc208e3d1854cc18b326398f295"} Feb 24 10:02:28 crc kubenswrapper[4755]: I0224 10:02:28.627999 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" event={"ID":"8effc883-ee29-4fbf-84f3-ca4305be75f3","Type":"ContainerStarted","Data":"7787ab4a7990003ec4bf3bd8cee7988fc6ffd4d84ed80bdc6cdfbf25b366816d"} Feb 24 10:02:28 crc kubenswrapper[4755]: I0224 10:02:28.628020 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:28 crc kubenswrapper[4755]: I0224 10:02:28.647829 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" podStartSLOduration=1.647804511 podStartE2EDuration="1.647804511s" podCreationTimestamp="2026-02-24 10:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 10:02:28.644375569 +0000 UTC m=+453.100898152" watchObservedRunningTime="2026-02-24 10:02:28.647804511 +0000 UTC m=+453.104327084" Feb 24 10:02:29 crc kubenswrapper[4755]: I0224 10:02:29.824631 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bkgvw"] Feb 24 10:02:29 crc kubenswrapper[4755]: I0224 10:02:29.825388 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bkgvw" podUID="47b5e8fc-e79e-4cd2-906b-0d8116e4d608" containerName="registry-server" containerID="cri-o://7c4052db91abf87984da7cb7d75a28ab8d421f9d0eb6df3cfb2e27c96ad2627b" gracePeriod=30 Feb 24 10:02:29 crc kubenswrapper[4755]: I0224 10:02:29.845845 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mqjwv"] Feb 24 10:02:29 crc kubenswrapper[4755]: I0224 10:02:29.846195 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mqjwv" podUID="f4344c16-3181-42d3-9d94-6cccd3fe8cc0" containerName="registry-server" containerID="cri-o://d3d10d6652d025f092a0162d22f049da7f76a95446b3952850c1fb076cf25db4" gracePeriod=30 Feb 24 10:02:29 crc kubenswrapper[4755]: I0224 10:02:29.862475 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tp8zp"] Feb 24 10:02:29 crc kubenswrapper[4755]: I0224 10:02:29.862814 4755 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" podUID="e105e7e0-0046-47ca-8a73-c27e385a0301" containerName="marketplace-operator" containerID="cri-o://4af76b4808f7dc5886889bfb4a0d75d98cf9c31c61d2f3d762f4b58701ef54ad" gracePeriod=30 Feb 24 10:02:29 crc kubenswrapper[4755]: I0224 10:02:29.869321 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gt9z"] Feb 24 10:02:29 crc kubenswrapper[4755]: I0224 10:02:29.869588 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6gt9z" podUID="2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1" containerName="registry-server" containerID="cri-o://ec36d0d4564973440b1cc7faf41dd06f37e4ea1f41a7ceff9c14832c24043f49" gracePeriod=30 Feb 24 10:02:29 crc kubenswrapper[4755]: I0224 10:02:29.878960 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2gqk6"] Feb 24 10:02:29 crc kubenswrapper[4755]: I0224 10:02:29.879265 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2gqk6" podUID="9fce8bb3-2fc9-496a-b0c0-873427d27571" containerName="registry-server" containerID="cri-o://500c85f84af8fb83736af6b19c0b282434b665f5e78e7e2270b2ed50bdf6cb22" gracePeriod=30 Feb 24 10:02:29 crc kubenswrapper[4755]: I0224 10:02:29.884860 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nhdbm"] Feb 24 10:02:29 crc kubenswrapper[4755]: I0224 10:02:29.885643 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nhdbm" Feb 24 10:02:29 crc kubenswrapper[4755]: I0224 10:02:29.894038 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nhdbm"] Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.042690 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0157f55-02e7-4faf-bbce-b5e2b41cfda9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nhdbm\" (UID: \"b0157f55-02e7-4faf-bbce-b5e2b41cfda9\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhdbm" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.042734 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0157f55-02e7-4faf-bbce-b5e2b41cfda9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nhdbm\" (UID: \"b0157f55-02e7-4faf-bbce-b5e2b41cfda9\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhdbm" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.042783 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr6b6\" (UniqueName: \"kubernetes.io/projected/b0157f55-02e7-4faf-bbce-b5e2b41cfda9-kube-api-access-tr6b6\") pod \"marketplace-operator-79b997595-nhdbm\" (UID: \"b0157f55-02e7-4faf-bbce-b5e2b41cfda9\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhdbm" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.144440 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0157f55-02e7-4faf-bbce-b5e2b41cfda9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nhdbm\" (UID: 
\"b0157f55-02e7-4faf-bbce-b5e2b41cfda9\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhdbm" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.144509 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr6b6\" (UniqueName: \"kubernetes.io/projected/b0157f55-02e7-4faf-bbce-b5e2b41cfda9-kube-api-access-tr6b6\") pod \"marketplace-operator-79b997595-nhdbm\" (UID: \"b0157f55-02e7-4faf-bbce-b5e2b41cfda9\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhdbm" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.144569 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0157f55-02e7-4faf-bbce-b5e2b41cfda9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nhdbm\" (UID: \"b0157f55-02e7-4faf-bbce-b5e2b41cfda9\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhdbm" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.145710 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0157f55-02e7-4faf-bbce-b5e2b41cfda9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nhdbm\" (UID: \"b0157f55-02e7-4faf-bbce-b5e2b41cfda9\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhdbm" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.157986 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0157f55-02e7-4faf-bbce-b5e2b41cfda9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nhdbm\" (UID: \"b0157f55-02e7-4faf-bbce-b5e2b41cfda9\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhdbm" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.161293 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tr6b6\" (UniqueName: \"kubernetes.io/projected/b0157f55-02e7-4faf-bbce-b5e2b41cfda9-kube-api-access-tr6b6\") pod \"marketplace-operator-79b997595-nhdbm\" (UID: \"b0157f55-02e7-4faf-bbce-b5e2b41cfda9\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhdbm" Feb 24 10:02:30 crc kubenswrapper[4755]: E0224 10:02:30.282651 4755 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d3d10d6652d025f092a0162d22f049da7f76a95446b3952850c1fb076cf25db4 is running failed: container process not found" containerID="d3d10d6652d025f092a0162d22f049da7f76a95446b3952850c1fb076cf25db4" cmd=["grpc_health_probe","-addr=:50051"] Feb 24 10:02:30 crc kubenswrapper[4755]: E0224 10:02:30.283230 4755 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d3d10d6652d025f092a0162d22f049da7f76a95446b3952850c1fb076cf25db4 is running failed: container process not found" containerID="d3d10d6652d025f092a0162d22f049da7f76a95446b3952850c1fb076cf25db4" cmd=["grpc_health_probe","-addr=:50051"] Feb 24 10:02:30 crc kubenswrapper[4755]: E0224 10:02:30.283606 4755 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d3d10d6652d025f092a0162d22f049da7f76a95446b3952850c1fb076cf25db4 is running failed: container process not found" containerID="d3d10d6652d025f092a0162d22f049da7f76a95446b3952850c1fb076cf25db4" cmd=["grpc_health_probe","-addr=:50051"] Feb 24 10:02:30 crc kubenswrapper[4755]: E0224 10:02:30.283659 4755 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d3d10d6652d025f092a0162d22f049da7f76a95446b3952850c1fb076cf25db4 is running failed: container process not found" probeType="Readiness" 
pod="openshift-marketplace/community-operators-mqjwv" podUID="f4344c16-3181-42d3-9d94-6cccd3fe8cc0" containerName="registry-server" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.308003 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nhdbm" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.314248 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bkgvw" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.335527 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2gqk6" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.381025 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lktzd\" (UniqueName: \"kubernetes.io/projected/9fce8bb3-2fc9-496a-b0c0-873427d27571-kube-api-access-lktzd\") pod \"9fce8bb3-2fc9-496a-b0c0-873427d27571\" (UID: \"9fce8bb3-2fc9-496a-b0c0-873427d27571\") " Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.381124 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fce8bb3-2fc9-496a-b0c0-873427d27571-utilities\") pod \"9fce8bb3-2fc9-496a-b0c0-873427d27571\" (UID: \"9fce8bb3-2fc9-496a-b0c0-873427d27571\") " Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.381176 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkgxx\" (UniqueName: \"kubernetes.io/projected/47b5e8fc-e79e-4cd2-906b-0d8116e4d608-kube-api-access-hkgxx\") pod \"47b5e8fc-e79e-4cd2-906b-0d8116e4d608\" (UID: \"47b5e8fc-e79e-4cd2-906b-0d8116e4d608\") " Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.381199 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/47b5e8fc-e79e-4cd2-906b-0d8116e4d608-utilities\") pod \"47b5e8fc-e79e-4cd2-906b-0d8116e4d608\" (UID: \"47b5e8fc-e79e-4cd2-906b-0d8116e4d608\") " Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.382608 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fce8bb3-2fc9-496a-b0c0-873427d27571-utilities" (OuterVolumeSpecName: "utilities") pod "9fce8bb3-2fc9-496a-b0c0-873427d27571" (UID: "9fce8bb3-2fc9-496a-b0c0-873427d27571"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.382848 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47b5e8fc-e79e-4cd2-906b-0d8116e4d608-catalog-content\") pod \"47b5e8fc-e79e-4cd2-906b-0d8116e4d608\" (UID: \"47b5e8fc-e79e-4cd2-906b-0d8116e4d608\") " Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.382946 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fce8bb3-2fc9-496a-b0c0-873427d27571-catalog-content\") pod \"9fce8bb3-2fc9-496a-b0c0-873427d27571\" (UID: \"9fce8bb3-2fc9-496a-b0c0-873427d27571\") " Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.383358 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fce8bb3-2fc9-496a-b0c0-873427d27571-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.383642 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47b5e8fc-e79e-4cd2-906b-0d8116e4d608-utilities" (OuterVolumeSpecName: "utilities") pod "47b5e8fc-e79e-4cd2-906b-0d8116e4d608" (UID: "47b5e8fc-e79e-4cd2-906b-0d8116e4d608"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.385571 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fce8bb3-2fc9-496a-b0c0-873427d27571-kube-api-access-lktzd" (OuterVolumeSpecName: "kube-api-access-lktzd") pod "9fce8bb3-2fc9-496a-b0c0-873427d27571" (UID: "9fce8bb3-2fc9-496a-b0c0-873427d27571"). InnerVolumeSpecName "kube-api-access-lktzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.397388 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47b5e8fc-e79e-4cd2-906b-0d8116e4d608-kube-api-access-hkgxx" (OuterVolumeSpecName: "kube-api-access-hkgxx") pod "47b5e8fc-e79e-4cd2-906b-0d8116e4d608" (UID: "47b5e8fc-e79e-4cd2-906b-0d8116e4d608"). InnerVolumeSpecName "kube-api-access-hkgxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.405238 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.437521 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gt9z" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.439158 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mqjwv" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.451280 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47b5e8fc-e79e-4cd2-906b-0d8116e4d608-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47b5e8fc-e79e-4cd2-906b-0d8116e4d608" (UID: "47b5e8fc-e79e-4cd2-906b-0d8116e4d608"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.483618 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1-utilities\") pod \"2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1\" (UID: \"2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1\") " Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.483664 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4344c16-3181-42d3-9d94-6cccd3fe8cc0-catalog-content\") pod \"f4344c16-3181-42d3-9d94-6cccd3fe8cc0\" (UID: \"f4344c16-3181-42d3-9d94-6cccd3fe8cc0\") " Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.483683 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1-catalog-content\") pod \"2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1\" (UID: \"2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1\") " Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.483710 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e105e7e0-0046-47ca-8a73-c27e385a0301-marketplace-operator-metrics\") pod \"e105e7e0-0046-47ca-8a73-c27e385a0301\" (UID: \"e105e7e0-0046-47ca-8a73-c27e385a0301\") " Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.483729 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4344c16-3181-42d3-9d94-6cccd3fe8cc0-utilities\") pod \"f4344c16-3181-42d3-9d94-6cccd3fe8cc0\" (UID: \"f4344c16-3181-42d3-9d94-6cccd3fe8cc0\") " Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.483772 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-5lkvm\" (UniqueName: \"kubernetes.io/projected/f4344c16-3181-42d3-9d94-6cccd3fe8cc0-kube-api-access-5lkvm\") pod \"f4344c16-3181-42d3-9d94-6cccd3fe8cc0\" (UID: \"f4344c16-3181-42d3-9d94-6cccd3fe8cc0\") " Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.483791 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e105e7e0-0046-47ca-8a73-c27e385a0301-marketplace-trusted-ca\") pod \"e105e7e0-0046-47ca-8a73-c27e385a0301\" (UID: \"e105e7e0-0046-47ca-8a73-c27e385a0301\") " Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.483819 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzpcl\" (UniqueName: \"kubernetes.io/projected/2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1-kube-api-access-qzpcl\") pod \"2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1\" (UID: \"2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1\") " Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.483837 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw8tl\" (UniqueName: \"kubernetes.io/projected/e105e7e0-0046-47ca-8a73-c27e385a0301-kube-api-access-sw8tl\") pod \"e105e7e0-0046-47ca-8a73-c27e385a0301\" (UID: \"e105e7e0-0046-47ca-8a73-c27e385a0301\") " Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.484012 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkgxx\" (UniqueName: \"kubernetes.io/projected/47b5e8fc-e79e-4cd2-906b-0d8116e4d608-kube-api-access-hkgxx\") on node \"crc\" DevicePath \"\"" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.484023 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47b5e8fc-e79e-4cd2-906b-0d8116e4d608-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.484032 4755 reconciler_common.go:293] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47b5e8fc-e79e-4cd2-906b-0d8116e4d608-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.484040 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lktzd\" (UniqueName: \"kubernetes.io/projected/9fce8bb3-2fc9-496a-b0c0-873427d27571-kube-api-access-lktzd\") on node \"crc\" DevicePath \"\"" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.484309 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1-utilities" (OuterVolumeSpecName: "utilities") pod "2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1" (UID: "2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.484913 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e105e7e0-0046-47ca-8a73-c27e385a0301-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "e105e7e0-0046-47ca-8a73-c27e385a0301" (UID: "e105e7e0-0046-47ca-8a73-c27e385a0301"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.485304 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4344c16-3181-42d3-9d94-6cccd3fe8cc0-utilities" (OuterVolumeSpecName: "utilities") pod "f4344c16-3181-42d3-9d94-6cccd3fe8cc0" (UID: "f4344c16-3181-42d3-9d94-6cccd3fe8cc0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.486781 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e105e7e0-0046-47ca-8a73-c27e385a0301-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "e105e7e0-0046-47ca-8a73-c27e385a0301" (UID: "e105e7e0-0046-47ca-8a73-c27e385a0301"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.487579 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e105e7e0-0046-47ca-8a73-c27e385a0301-kube-api-access-sw8tl" (OuterVolumeSpecName: "kube-api-access-sw8tl") pod "e105e7e0-0046-47ca-8a73-c27e385a0301" (UID: "e105e7e0-0046-47ca-8a73-c27e385a0301"). InnerVolumeSpecName "kube-api-access-sw8tl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.489410 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4344c16-3181-42d3-9d94-6cccd3fe8cc0-kube-api-access-5lkvm" (OuterVolumeSpecName: "kube-api-access-5lkvm") pod "f4344c16-3181-42d3-9d94-6cccd3fe8cc0" (UID: "f4344c16-3181-42d3-9d94-6cccd3fe8cc0"). InnerVolumeSpecName "kube-api-access-5lkvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.495301 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1-kube-api-access-qzpcl" (OuterVolumeSpecName: "kube-api-access-qzpcl") pod "2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1" (UID: "2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1"). InnerVolumeSpecName "kube-api-access-qzpcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.517320 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1" (UID: "2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.532581 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4344c16-3181-42d3-9d94-6cccd3fe8cc0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4344c16-3181-42d3-9d94-6cccd3fe8cc0" (UID: "f4344c16-3181-42d3-9d94-6cccd3fe8cc0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.553654 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fce8bb3-2fc9-496a-b0c0-873427d27571-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9fce8bb3-2fc9-496a-b0c0-873427d27571" (UID: "9fce8bb3-2fc9-496a-b0c0-873427d27571"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.585646 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lkvm\" (UniqueName: \"kubernetes.io/projected/f4344c16-3181-42d3-9d94-6cccd3fe8cc0-kube-api-access-5lkvm\") on node \"crc\" DevicePath \"\"" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.585685 4755 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e105e7e0-0046-47ca-8a73-c27e385a0301-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.585698 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzpcl\" (UniqueName: \"kubernetes.io/projected/2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1-kube-api-access-qzpcl\") on node \"crc\" DevicePath \"\"" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.585712 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw8tl\" (UniqueName: \"kubernetes.io/projected/e105e7e0-0046-47ca-8a73-c27e385a0301-kube-api-access-sw8tl\") on node \"crc\" DevicePath \"\"" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.585724 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.585736 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fce8bb3-2fc9-496a-b0c0-873427d27571-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.585746 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4344c16-3181-42d3-9d94-6cccd3fe8cc0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 10:02:30 
crc kubenswrapper[4755]: I0224 10:02:30.585756 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.585767 4755 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e105e7e0-0046-47ca-8a73-c27e385a0301-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.585779 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4344c16-3181-42d3-9d94-6cccd3fe8cc0-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.648666 4755 generic.go:334] "Generic (PLEG): container finished" podID="47b5e8fc-e79e-4cd2-906b-0d8116e4d608" containerID="7c4052db91abf87984da7cb7d75a28ab8d421f9d0eb6df3cfb2e27c96ad2627b" exitCode=0 Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.648732 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bkgvw" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.648748 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkgvw" event={"ID":"47b5e8fc-e79e-4cd2-906b-0d8116e4d608","Type":"ContainerDied","Data":"7c4052db91abf87984da7cb7d75a28ab8d421f9d0eb6df3cfb2e27c96ad2627b"} Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.649131 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkgvw" event={"ID":"47b5e8fc-e79e-4cd2-906b-0d8116e4d608","Type":"ContainerDied","Data":"0323e89ad66aa6bacfaa80a8c98fd295c001b29d03f78d386488e1e3616c1656"} Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.649163 4755 scope.go:117] "RemoveContainer" containerID="7c4052db91abf87984da7cb7d75a28ab8d421f9d0eb6df3cfb2e27c96ad2627b" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.651961 4755 generic.go:334] "Generic (PLEG): container finished" podID="e105e7e0-0046-47ca-8a73-c27e385a0301" containerID="4af76b4808f7dc5886889bfb4a0d75d98cf9c31c61d2f3d762f4b58701ef54ad" exitCode=0 Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.651998 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" event={"ID":"e105e7e0-0046-47ca-8a73-c27e385a0301","Type":"ContainerDied","Data":"4af76b4808f7dc5886889bfb4a0d75d98cf9c31c61d2f3d762f4b58701ef54ad"} Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.652013 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" event={"ID":"e105e7e0-0046-47ca-8a73-c27e385a0301","Type":"ContainerDied","Data":"db40bc7cfd6b26702365a5dc871e266bf888e8022fe4cb050c704bea14228ee2"} Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.652126 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tp8zp" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.657948 4755 generic.go:334] "Generic (PLEG): container finished" podID="9fce8bb3-2fc9-496a-b0c0-873427d27571" containerID="500c85f84af8fb83736af6b19c0b282434b665f5e78e7e2270b2ed50bdf6cb22" exitCode=0 Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.658021 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gqk6" event={"ID":"9fce8bb3-2fc9-496a-b0c0-873427d27571","Type":"ContainerDied","Data":"500c85f84af8fb83736af6b19c0b282434b665f5e78e7e2270b2ed50bdf6cb22"} Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.658051 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gqk6" event={"ID":"9fce8bb3-2fc9-496a-b0c0-873427d27571","Type":"ContainerDied","Data":"a93faa99bb62bfadd21e1cc0b95a2a647557eaec759206d6cb3279486396b0d8"} Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.658097 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2gqk6" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.662611 4755 generic.go:334] "Generic (PLEG): container finished" podID="f4344c16-3181-42d3-9d94-6cccd3fe8cc0" containerID="d3d10d6652d025f092a0162d22f049da7f76a95446b3952850c1fb076cf25db4" exitCode=0 Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.662691 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqjwv" event={"ID":"f4344c16-3181-42d3-9d94-6cccd3fe8cc0","Type":"ContainerDied","Data":"d3d10d6652d025f092a0162d22f049da7f76a95446b3952850c1fb076cf25db4"} Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.662722 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqjwv" event={"ID":"f4344c16-3181-42d3-9d94-6cccd3fe8cc0","Type":"ContainerDied","Data":"9eee4239dd13b83a9950cf46f606653a118c2c756e910f2cd0f851897bd6da1e"} Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.662811 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mqjwv" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.665779 4755 generic.go:334] "Generic (PLEG): container finished" podID="2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1" containerID="ec36d0d4564973440b1cc7faf41dd06f37e4ea1f41a7ceff9c14832c24043f49" exitCode=0 Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.665821 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gt9z" event={"ID":"2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1","Type":"ContainerDied","Data":"ec36d0d4564973440b1cc7faf41dd06f37e4ea1f41a7ceff9c14832c24043f49"} Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.665847 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gt9z" event={"ID":"2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1","Type":"ContainerDied","Data":"b427d75a5ecc2958d521274fd5adeda692ac9c95afba17ac7482f5fe720ec6c7"} Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.665943 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gt9z" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.687271 4755 scope.go:117] "RemoveContainer" containerID="9999af3104d764c727499e2901ab07fe42a27bf7b84230b68ea3bf6f6310a7de" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.694646 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bkgvw"] Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.698929 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bkgvw"] Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.712379 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2gqk6"] Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.718639 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2gqk6"] Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.723331 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gt9z"] Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.726294 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gt9z"] Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.732257 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tp8zp"] Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.733349 4755 scope.go:117] "RemoveContainer" containerID="f093e7b54ec5a4516ebf73dacedaf9e1c0037e65f3f6f40c11d830a08e7a839c" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.735663 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tp8zp"] Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.750436 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-mqjwv"] Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.753191 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mqjwv"] Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.761646 4755 scope.go:117] "RemoveContainer" containerID="7c4052db91abf87984da7cb7d75a28ab8d421f9d0eb6df3cfb2e27c96ad2627b" Feb 24 10:02:30 crc kubenswrapper[4755]: E0224 10:02:30.762632 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c4052db91abf87984da7cb7d75a28ab8d421f9d0eb6df3cfb2e27c96ad2627b\": container with ID starting with 7c4052db91abf87984da7cb7d75a28ab8d421f9d0eb6df3cfb2e27c96ad2627b not found: ID does not exist" containerID="7c4052db91abf87984da7cb7d75a28ab8d421f9d0eb6df3cfb2e27c96ad2627b" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.762677 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c4052db91abf87984da7cb7d75a28ab8d421f9d0eb6df3cfb2e27c96ad2627b"} err="failed to get container status \"7c4052db91abf87984da7cb7d75a28ab8d421f9d0eb6df3cfb2e27c96ad2627b\": rpc error: code = NotFound desc = could not find container \"7c4052db91abf87984da7cb7d75a28ab8d421f9d0eb6df3cfb2e27c96ad2627b\": container with ID starting with 7c4052db91abf87984da7cb7d75a28ab8d421f9d0eb6df3cfb2e27c96ad2627b not found: ID does not exist" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.762709 4755 scope.go:117] "RemoveContainer" containerID="9999af3104d764c727499e2901ab07fe42a27bf7b84230b68ea3bf6f6310a7de" Feb 24 10:02:30 crc kubenswrapper[4755]: E0224 10:02:30.765784 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9999af3104d764c727499e2901ab07fe42a27bf7b84230b68ea3bf6f6310a7de\": container with ID starting with 
9999af3104d764c727499e2901ab07fe42a27bf7b84230b68ea3bf6f6310a7de not found: ID does not exist" containerID="9999af3104d764c727499e2901ab07fe42a27bf7b84230b68ea3bf6f6310a7de" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.765840 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9999af3104d764c727499e2901ab07fe42a27bf7b84230b68ea3bf6f6310a7de"} err="failed to get container status \"9999af3104d764c727499e2901ab07fe42a27bf7b84230b68ea3bf6f6310a7de\": rpc error: code = NotFound desc = could not find container \"9999af3104d764c727499e2901ab07fe42a27bf7b84230b68ea3bf6f6310a7de\": container with ID starting with 9999af3104d764c727499e2901ab07fe42a27bf7b84230b68ea3bf6f6310a7de not found: ID does not exist" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.765872 4755 scope.go:117] "RemoveContainer" containerID="f093e7b54ec5a4516ebf73dacedaf9e1c0037e65f3f6f40c11d830a08e7a839c" Feb 24 10:02:30 crc kubenswrapper[4755]: E0224 10:02:30.766297 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f093e7b54ec5a4516ebf73dacedaf9e1c0037e65f3f6f40c11d830a08e7a839c\": container with ID starting with f093e7b54ec5a4516ebf73dacedaf9e1c0037e65f3f6f40c11d830a08e7a839c not found: ID does not exist" containerID="f093e7b54ec5a4516ebf73dacedaf9e1c0037e65f3f6f40c11d830a08e7a839c" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.766322 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f093e7b54ec5a4516ebf73dacedaf9e1c0037e65f3f6f40c11d830a08e7a839c"} err="failed to get container status \"f093e7b54ec5a4516ebf73dacedaf9e1c0037e65f3f6f40c11d830a08e7a839c\": rpc error: code = NotFound desc = could not find container \"f093e7b54ec5a4516ebf73dacedaf9e1c0037e65f3f6f40c11d830a08e7a839c\": container with ID starting with f093e7b54ec5a4516ebf73dacedaf9e1c0037e65f3f6f40c11d830a08e7a839c not found: ID does not 
exist" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.766339 4755 scope.go:117] "RemoveContainer" containerID="4af76b4808f7dc5886889bfb4a0d75d98cf9c31c61d2f3d762f4b58701ef54ad" Feb 24 10:02:30 crc kubenswrapper[4755]: E0224 10:02:30.770079 4755 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode105e7e0_0046_47ca_8a73_c27e385a0301.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47b5e8fc_e79e_4cd2_906b_0d8116e4d608.slice/crio-0323e89ad66aa6bacfaa80a8c98fd295c001b29d03f78d386488e1e3616c1656\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4344c16_3181_42d3_9d94_6cccd3fe8cc0.slice/crio-9eee4239dd13b83a9950cf46f606653a118c2c756e910f2cd0f851897bd6da1e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fce8bb3_2fc9_496a_b0c0_873427d27571.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47b5e8fc_e79e_4cd2_906b_0d8116e4d608.slice\": RecentStats: unable to find data in memory cache]" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.780917 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nhdbm"] Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.786749 4755 scope.go:117] "RemoveContainer" containerID="3135b9ca423f992987cdab3ae9179ef151afbc6ec38de245af48f4c5b16f3b94" Feb 24 10:02:30 crc kubenswrapper[4755]: W0224 10:02:30.788032 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0157f55_02e7_4faf_bbce_b5e2b41cfda9.slice/crio-6b54afc27e295784eca690833d25d2425f91fc996a5b257d25077faf26562ab2 
WatchSource:0}: Error finding container 6b54afc27e295784eca690833d25d2425f91fc996a5b257d25077faf26562ab2: Status 404 returned error can't find the container with id 6b54afc27e295784eca690833d25d2425f91fc996a5b257d25077faf26562ab2 Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.802345 4755 scope.go:117] "RemoveContainer" containerID="4af76b4808f7dc5886889bfb4a0d75d98cf9c31c61d2f3d762f4b58701ef54ad" Feb 24 10:02:30 crc kubenswrapper[4755]: E0224 10:02:30.802651 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4af76b4808f7dc5886889bfb4a0d75d98cf9c31c61d2f3d762f4b58701ef54ad\": container with ID starting with 4af76b4808f7dc5886889bfb4a0d75d98cf9c31c61d2f3d762f4b58701ef54ad not found: ID does not exist" containerID="4af76b4808f7dc5886889bfb4a0d75d98cf9c31c61d2f3d762f4b58701ef54ad" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.802694 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4af76b4808f7dc5886889bfb4a0d75d98cf9c31c61d2f3d762f4b58701ef54ad"} err="failed to get container status \"4af76b4808f7dc5886889bfb4a0d75d98cf9c31c61d2f3d762f4b58701ef54ad\": rpc error: code = NotFound desc = could not find container \"4af76b4808f7dc5886889bfb4a0d75d98cf9c31c61d2f3d762f4b58701ef54ad\": container with ID starting with 4af76b4808f7dc5886889bfb4a0d75d98cf9c31c61d2f3d762f4b58701ef54ad not found: ID does not exist" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.802724 4755 scope.go:117] "RemoveContainer" containerID="3135b9ca423f992987cdab3ae9179ef151afbc6ec38de245af48f4c5b16f3b94" Feb 24 10:02:30 crc kubenswrapper[4755]: E0224 10:02:30.804079 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3135b9ca423f992987cdab3ae9179ef151afbc6ec38de245af48f4c5b16f3b94\": container with ID starting with 
3135b9ca423f992987cdab3ae9179ef151afbc6ec38de245af48f4c5b16f3b94 not found: ID does not exist" containerID="3135b9ca423f992987cdab3ae9179ef151afbc6ec38de245af48f4c5b16f3b94" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.804113 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3135b9ca423f992987cdab3ae9179ef151afbc6ec38de245af48f4c5b16f3b94"} err="failed to get container status \"3135b9ca423f992987cdab3ae9179ef151afbc6ec38de245af48f4c5b16f3b94\": rpc error: code = NotFound desc = could not find container \"3135b9ca423f992987cdab3ae9179ef151afbc6ec38de245af48f4c5b16f3b94\": container with ID starting with 3135b9ca423f992987cdab3ae9179ef151afbc6ec38de245af48f4c5b16f3b94 not found: ID does not exist" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.804131 4755 scope.go:117] "RemoveContainer" containerID="500c85f84af8fb83736af6b19c0b282434b665f5e78e7e2270b2ed50bdf6cb22" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.817354 4755 scope.go:117] "RemoveContainer" containerID="3d344a74d1815626ad9ff697fc614646f9cbf0e2683c420b80c98d0a5da8e51c" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.835171 4755 scope.go:117] "RemoveContainer" containerID="94c500fb7a230622d0f4540b263fd4dbed53d89fb25e6adec538d1eb7438ebd3" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.855308 4755 scope.go:117] "RemoveContainer" containerID="500c85f84af8fb83736af6b19c0b282434b665f5e78e7e2270b2ed50bdf6cb22" Feb 24 10:02:30 crc kubenswrapper[4755]: E0224 10:02:30.855685 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"500c85f84af8fb83736af6b19c0b282434b665f5e78e7e2270b2ed50bdf6cb22\": container with ID starting with 500c85f84af8fb83736af6b19c0b282434b665f5e78e7e2270b2ed50bdf6cb22 not found: ID does not exist" containerID="500c85f84af8fb83736af6b19c0b282434b665f5e78e7e2270b2ed50bdf6cb22" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 
10:02:30.855742 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"500c85f84af8fb83736af6b19c0b282434b665f5e78e7e2270b2ed50bdf6cb22"} err="failed to get container status \"500c85f84af8fb83736af6b19c0b282434b665f5e78e7e2270b2ed50bdf6cb22\": rpc error: code = NotFound desc = could not find container \"500c85f84af8fb83736af6b19c0b282434b665f5e78e7e2270b2ed50bdf6cb22\": container with ID starting with 500c85f84af8fb83736af6b19c0b282434b665f5e78e7e2270b2ed50bdf6cb22 not found: ID does not exist" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.855779 4755 scope.go:117] "RemoveContainer" containerID="3d344a74d1815626ad9ff697fc614646f9cbf0e2683c420b80c98d0a5da8e51c" Feb 24 10:02:30 crc kubenswrapper[4755]: E0224 10:02:30.856203 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d344a74d1815626ad9ff697fc614646f9cbf0e2683c420b80c98d0a5da8e51c\": container with ID starting with 3d344a74d1815626ad9ff697fc614646f9cbf0e2683c420b80c98d0a5da8e51c not found: ID does not exist" containerID="3d344a74d1815626ad9ff697fc614646f9cbf0e2683c420b80c98d0a5da8e51c" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.856263 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d344a74d1815626ad9ff697fc614646f9cbf0e2683c420b80c98d0a5da8e51c"} err="failed to get container status \"3d344a74d1815626ad9ff697fc614646f9cbf0e2683c420b80c98d0a5da8e51c\": rpc error: code = NotFound desc = could not find container \"3d344a74d1815626ad9ff697fc614646f9cbf0e2683c420b80c98d0a5da8e51c\": container with ID starting with 3d344a74d1815626ad9ff697fc614646f9cbf0e2683c420b80c98d0a5da8e51c not found: ID does not exist" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.856287 4755 scope.go:117] "RemoveContainer" containerID="94c500fb7a230622d0f4540b263fd4dbed53d89fb25e6adec538d1eb7438ebd3" Feb 24 10:02:30 crc 
kubenswrapper[4755]: E0224 10:02:30.856821 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94c500fb7a230622d0f4540b263fd4dbed53d89fb25e6adec538d1eb7438ebd3\": container with ID starting with 94c500fb7a230622d0f4540b263fd4dbed53d89fb25e6adec538d1eb7438ebd3 not found: ID does not exist" containerID="94c500fb7a230622d0f4540b263fd4dbed53d89fb25e6adec538d1eb7438ebd3" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.856838 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94c500fb7a230622d0f4540b263fd4dbed53d89fb25e6adec538d1eb7438ebd3"} err="failed to get container status \"94c500fb7a230622d0f4540b263fd4dbed53d89fb25e6adec538d1eb7438ebd3\": rpc error: code = NotFound desc = could not find container \"94c500fb7a230622d0f4540b263fd4dbed53d89fb25e6adec538d1eb7438ebd3\": container with ID starting with 94c500fb7a230622d0f4540b263fd4dbed53d89fb25e6adec538d1eb7438ebd3 not found: ID does not exist" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.856851 4755 scope.go:117] "RemoveContainer" containerID="d3d10d6652d025f092a0162d22f049da7f76a95446b3952850c1fb076cf25db4" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.881142 4755 scope.go:117] "RemoveContainer" containerID="e74066db263cbe180bd8ddb91a49dd0dd23fc01efbf6fae4e947e800e319adcb" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.897012 4755 scope.go:117] "RemoveContainer" containerID="02340fd8a8d5b60e58f8085c7d639ae63f0352ad0eb303b02fc4362fdcf072b8" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.911786 4755 scope.go:117] "RemoveContainer" containerID="d3d10d6652d025f092a0162d22f049da7f76a95446b3952850c1fb076cf25db4" Feb 24 10:02:30 crc kubenswrapper[4755]: E0224 10:02:30.912128 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d3d10d6652d025f092a0162d22f049da7f76a95446b3952850c1fb076cf25db4\": container with ID starting with d3d10d6652d025f092a0162d22f049da7f76a95446b3952850c1fb076cf25db4 not found: ID does not exist" containerID="d3d10d6652d025f092a0162d22f049da7f76a95446b3952850c1fb076cf25db4" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.912185 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3d10d6652d025f092a0162d22f049da7f76a95446b3952850c1fb076cf25db4"} err="failed to get container status \"d3d10d6652d025f092a0162d22f049da7f76a95446b3952850c1fb076cf25db4\": rpc error: code = NotFound desc = could not find container \"d3d10d6652d025f092a0162d22f049da7f76a95446b3952850c1fb076cf25db4\": container with ID starting with d3d10d6652d025f092a0162d22f049da7f76a95446b3952850c1fb076cf25db4 not found: ID does not exist" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.912215 4755 scope.go:117] "RemoveContainer" containerID="e74066db263cbe180bd8ddb91a49dd0dd23fc01efbf6fae4e947e800e319adcb" Feb 24 10:02:30 crc kubenswrapper[4755]: E0224 10:02:30.912529 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e74066db263cbe180bd8ddb91a49dd0dd23fc01efbf6fae4e947e800e319adcb\": container with ID starting with e74066db263cbe180bd8ddb91a49dd0dd23fc01efbf6fae4e947e800e319adcb not found: ID does not exist" containerID="e74066db263cbe180bd8ddb91a49dd0dd23fc01efbf6fae4e947e800e319adcb" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.912549 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e74066db263cbe180bd8ddb91a49dd0dd23fc01efbf6fae4e947e800e319adcb"} err="failed to get container status \"e74066db263cbe180bd8ddb91a49dd0dd23fc01efbf6fae4e947e800e319adcb\": rpc error: code = NotFound desc = could not find container \"e74066db263cbe180bd8ddb91a49dd0dd23fc01efbf6fae4e947e800e319adcb\": container with ID 
starting with e74066db263cbe180bd8ddb91a49dd0dd23fc01efbf6fae4e947e800e319adcb not found: ID does not exist" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.912568 4755 scope.go:117] "RemoveContainer" containerID="02340fd8a8d5b60e58f8085c7d639ae63f0352ad0eb303b02fc4362fdcf072b8" Feb 24 10:02:30 crc kubenswrapper[4755]: E0224 10:02:30.913879 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02340fd8a8d5b60e58f8085c7d639ae63f0352ad0eb303b02fc4362fdcf072b8\": container with ID starting with 02340fd8a8d5b60e58f8085c7d639ae63f0352ad0eb303b02fc4362fdcf072b8 not found: ID does not exist" containerID="02340fd8a8d5b60e58f8085c7d639ae63f0352ad0eb303b02fc4362fdcf072b8" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.913940 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02340fd8a8d5b60e58f8085c7d639ae63f0352ad0eb303b02fc4362fdcf072b8"} err="failed to get container status \"02340fd8a8d5b60e58f8085c7d639ae63f0352ad0eb303b02fc4362fdcf072b8\": rpc error: code = NotFound desc = could not find container \"02340fd8a8d5b60e58f8085c7d639ae63f0352ad0eb303b02fc4362fdcf072b8\": container with ID starting with 02340fd8a8d5b60e58f8085c7d639ae63f0352ad0eb303b02fc4362fdcf072b8 not found: ID does not exist" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.913962 4755 scope.go:117] "RemoveContainer" containerID="ec36d0d4564973440b1cc7faf41dd06f37e4ea1f41a7ceff9c14832c24043f49" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.929267 4755 scope.go:117] "RemoveContainer" containerID="fba14e95440e80e9364929e04927c02930f6c351c23f530704f2b7f449709502" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.944803 4755 scope.go:117] "RemoveContainer" containerID="289b398aff338fd93042c728c523826c1668f95bbf76b80b3a8bf255a18738ba" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.964911 4755 scope.go:117] "RemoveContainer" 
containerID="ec36d0d4564973440b1cc7faf41dd06f37e4ea1f41a7ceff9c14832c24043f49" Feb 24 10:02:30 crc kubenswrapper[4755]: E0224 10:02:30.965514 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec36d0d4564973440b1cc7faf41dd06f37e4ea1f41a7ceff9c14832c24043f49\": container with ID starting with ec36d0d4564973440b1cc7faf41dd06f37e4ea1f41a7ceff9c14832c24043f49 not found: ID does not exist" containerID="ec36d0d4564973440b1cc7faf41dd06f37e4ea1f41a7ceff9c14832c24043f49" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.965633 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec36d0d4564973440b1cc7faf41dd06f37e4ea1f41a7ceff9c14832c24043f49"} err="failed to get container status \"ec36d0d4564973440b1cc7faf41dd06f37e4ea1f41a7ceff9c14832c24043f49\": rpc error: code = NotFound desc = could not find container \"ec36d0d4564973440b1cc7faf41dd06f37e4ea1f41a7ceff9c14832c24043f49\": container with ID starting with ec36d0d4564973440b1cc7faf41dd06f37e4ea1f41a7ceff9c14832c24043f49 not found: ID does not exist" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.965727 4755 scope.go:117] "RemoveContainer" containerID="fba14e95440e80e9364929e04927c02930f6c351c23f530704f2b7f449709502" Feb 24 10:02:30 crc kubenswrapper[4755]: E0224 10:02:30.966184 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fba14e95440e80e9364929e04927c02930f6c351c23f530704f2b7f449709502\": container with ID starting with fba14e95440e80e9364929e04927c02930f6c351c23f530704f2b7f449709502 not found: ID does not exist" containerID="fba14e95440e80e9364929e04927c02930f6c351c23f530704f2b7f449709502" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.966278 4755 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fba14e95440e80e9364929e04927c02930f6c351c23f530704f2b7f449709502"} err="failed to get container status \"fba14e95440e80e9364929e04927c02930f6c351c23f530704f2b7f449709502\": rpc error: code = NotFound desc = could not find container \"fba14e95440e80e9364929e04927c02930f6c351c23f530704f2b7f449709502\": container with ID starting with fba14e95440e80e9364929e04927c02930f6c351c23f530704f2b7f449709502 not found: ID does not exist" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.966369 4755 scope.go:117] "RemoveContainer" containerID="289b398aff338fd93042c728c523826c1668f95bbf76b80b3a8bf255a18738ba" Feb 24 10:02:30 crc kubenswrapper[4755]: E0224 10:02:30.966791 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"289b398aff338fd93042c728c523826c1668f95bbf76b80b3a8bf255a18738ba\": container with ID starting with 289b398aff338fd93042c728c523826c1668f95bbf76b80b3a8bf255a18738ba not found: ID does not exist" containerID="289b398aff338fd93042c728c523826c1668f95bbf76b80b3a8bf255a18738ba" Feb 24 10:02:30 crc kubenswrapper[4755]: I0224 10:02:30.966832 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"289b398aff338fd93042c728c523826c1668f95bbf76b80b3a8bf255a18738ba"} err="failed to get container status \"289b398aff338fd93042c728c523826c1668f95bbf76b80b3a8bf255a18738ba\": rpc error: code = NotFound desc = could not find container \"289b398aff338fd93042c728c523826c1668f95bbf76b80b3a8bf255a18738ba\": container with ID starting with 289b398aff338fd93042c728c523826c1668f95bbf76b80b3a8bf255a18738ba not found: ID does not exist" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.168849 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k4trd"] Feb 24 10:02:31 crc kubenswrapper[4755]: E0224 10:02:31.169354 4755 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4344c16-3181-42d3-9d94-6cccd3fe8cc0" containerName="registry-server" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.169443 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4344c16-3181-42d3-9d94-6cccd3fe8cc0" containerName="registry-server" Feb 24 10:02:31 crc kubenswrapper[4755]: E0224 10:02:31.169543 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47b5e8fc-e79e-4cd2-906b-0d8116e4d608" containerName="extract-utilities" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.169632 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="47b5e8fc-e79e-4cd2-906b-0d8116e4d608" containerName="extract-utilities" Feb 24 10:02:31 crc kubenswrapper[4755]: E0224 10:02:31.169715 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e105e7e0-0046-47ca-8a73-c27e385a0301" containerName="marketplace-operator" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.169796 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="e105e7e0-0046-47ca-8a73-c27e385a0301" containerName="marketplace-operator" Feb 24 10:02:31 crc kubenswrapper[4755]: E0224 10:02:31.169886 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1" containerName="extract-content" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.169962 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1" containerName="extract-content" Feb 24 10:02:31 crc kubenswrapper[4755]: E0224 10:02:31.170040 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4344c16-3181-42d3-9d94-6cccd3fe8cc0" containerName="extract-utilities" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.170152 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4344c16-3181-42d3-9d94-6cccd3fe8cc0" containerName="extract-utilities" Feb 24 10:02:31 crc kubenswrapper[4755]: E0224 10:02:31.170242 4755 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="9fce8bb3-2fc9-496a-b0c0-873427d27571" containerName="extract-content" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.170317 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fce8bb3-2fc9-496a-b0c0-873427d27571" containerName="extract-content" Feb 24 10:02:31 crc kubenswrapper[4755]: E0224 10:02:31.170378 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fce8bb3-2fc9-496a-b0c0-873427d27571" containerName="registry-server" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.170437 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fce8bb3-2fc9-496a-b0c0-873427d27571" containerName="registry-server" Feb 24 10:02:31 crc kubenswrapper[4755]: E0224 10:02:31.170503 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47b5e8fc-e79e-4cd2-906b-0d8116e4d608" containerName="registry-server" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.170564 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="47b5e8fc-e79e-4cd2-906b-0d8116e4d608" containerName="registry-server" Feb 24 10:02:31 crc kubenswrapper[4755]: E0224 10:02:31.170626 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e105e7e0-0046-47ca-8a73-c27e385a0301" containerName="marketplace-operator" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.170685 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="e105e7e0-0046-47ca-8a73-c27e385a0301" containerName="marketplace-operator" Feb 24 10:02:31 crc kubenswrapper[4755]: E0224 10:02:31.170744 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4344c16-3181-42d3-9d94-6cccd3fe8cc0" containerName="extract-content" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.170797 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4344c16-3181-42d3-9d94-6cccd3fe8cc0" containerName="extract-content" Feb 24 10:02:31 crc kubenswrapper[4755]: E0224 10:02:31.170851 4755 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1" containerName="registry-server" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.170912 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1" containerName="registry-server" Feb 24 10:02:31 crc kubenswrapper[4755]: E0224 10:02:31.170977 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47b5e8fc-e79e-4cd2-906b-0d8116e4d608" containerName="extract-content" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.171039 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="47b5e8fc-e79e-4cd2-906b-0d8116e4d608" containerName="extract-content" Feb 24 10:02:31 crc kubenswrapper[4755]: E0224 10:02:31.171114 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fce8bb3-2fc9-496a-b0c0-873427d27571" containerName="extract-utilities" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.171168 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fce8bb3-2fc9-496a-b0c0-873427d27571" containerName="extract-utilities" Feb 24 10:02:31 crc kubenswrapper[4755]: E0224 10:02:31.171233 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1" containerName="extract-utilities" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.171289 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1" containerName="extract-utilities" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.171511 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fce8bb3-2fc9-496a-b0c0-873427d27571" containerName="registry-server" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.171604 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4344c16-3181-42d3-9d94-6cccd3fe8cc0" containerName="registry-server" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.171688 4755 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="e105e7e0-0046-47ca-8a73-c27e385a0301" containerName="marketplace-operator" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.171750 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1" containerName="registry-server" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.171823 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="47b5e8fc-e79e-4cd2-906b-0d8116e4d608" containerName="registry-server" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.171891 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="e105e7e0-0046-47ca-8a73-c27e385a0301" containerName="marketplace-operator" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.172753 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k4trd" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.175102 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.182834 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k4trd"] Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.193267 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481780d1-bd6b-4674-a476-9e10935c1927-catalog-content\") pod \"certified-operators-k4trd\" (UID: \"481780d1-bd6b-4674-a476-9e10935c1927\") " pod="openshift-marketplace/certified-operators-k4trd" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.193657 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481780d1-bd6b-4674-a476-9e10935c1927-utilities\") pod \"certified-operators-k4trd\" (UID: 
\"481780d1-bd6b-4674-a476-9e10935c1927\") " pod="openshift-marketplace/certified-operators-k4trd" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.193724 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zplmm\" (UniqueName: \"kubernetes.io/projected/481780d1-bd6b-4674-a476-9e10935c1927-kube-api-access-zplmm\") pod \"certified-operators-k4trd\" (UID: \"481780d1-bd6b-4674-a476-9e10935c1927\") " pod="openshift-marketplace/certified-operators-k4trd" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.294374 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481780d1-bd6b-4674-a476-9e10935c1927-utilities\") pod \"certified-operators-k4trd\" (UID: \"481780d1-bd6b-4674-a476-9e10935c1927\") " pod="openshift-marketplace/certified-operators-k4trd" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.294450 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zplmm\" (UniqueName: \"kubernetes.io/projected/481780d1-bd6b-4674-a476-9e10935c1927-kube-api-access-zplmm\") pod \"certified-operators-k4trd\" (UID: \"481780d1-bd6b-4674-a476-9e10935c1927\") " pod="openshift-marketplace/certified-operators-k4trd" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.294486 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481780d1-bd6b-4674-a476-9e10935c1927-catalog-content\") pod \"certified-operators-k4trd\" (UID: \"481780d1-bd6b-4674-a476-9e10935c1927\") " pod="openshift-marketplace/certified-operators-k4trd" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.294849 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481780d1-bd6b-4674-a476-9e10935c1927-utilities\") pod \"certified-operators-k4trd\" (UID: 
\"481780d1-bd6b-4674-a476-9e10935c1927\") " pod="openshift-marketplace/certified-operators-k4trd" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.294888 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481780d1-bd6b-4674-a476-9e10935c1927-catalog-content\") pod \"certified-operators-k4trd\" (UID: \"481780d1-bd6b-4674-a476-9e10935c1927\") " pod="openshift-marketplace/certified-operators-k4trd" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.315361 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zplmm\" (UniqueName: \"kubernetes.io/projected/481780d1-bd6b-4674-a476-9e10935c1927-kube-api-access-zplmm\") pod \"certified-operators-k4trd\" (UID: \"481780d1-bd6b-4674-a476-9e10935c1927\") " pod="openshift-marketplace/certified-operators-k4trd" Feb 24 10:02:31 crc kubenswrapper[4755]: I0224 10:02:31.505848 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k4trd" Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:31.687093 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nhdbm" event={"ID":"b0157f55-02e7-4faf-bbce-b5e2b41cfda9","Type":"ContainerStarted","Data":"579d0d36b9745eef7f74114f1b10f51187e0a730dec2618d70185cc7d8b6855e"} Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:31.687144 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nhdbm" event={"ID":"b0157f55-02e7-4faf-bbce-b5e2b41cfda9","Type":"ContainerStarted","Data":"6b54afc27e295784eca690833d25d2425f91fc996a5b257d25077faf26562ab2"} Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:31.688026 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-nhdbm" Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:31.699217 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-nhdbm" Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:31.724402 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-nhdbm" podStartSLOduration=2.724317602 podStartE2EDuration="2.724317602s" podCreationTimestamp="2026-02-24 10:02:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 10:02:31.717835015 +0000 UTC m=+456.174357598" watchObservedRunningTime="2026-02-24 10:02:31.724317602 +0000 UTC m=+456.180840165" Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:31.761934 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k4trd"] Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:32.174330 4755 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-98x65"] Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:32.176145 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-98x65" Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:32.178995 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:32.184899 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-98x65"] Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:32.215021 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngn7c\" (UniqueName: \"kubernetes.io/projected/dbe85769-d394-4e96-a35e-cbf888b52bfa-kube-api-access-ngn7c\") pod \"community-operators-98x65\" (UID: \"dbe85769-d394-4e96-a35e-cbf888b52bfa\") " pod="openshift-marketplace/community-operators-98x65" Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:32.215185 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbe85769-d394-4e96-a35e-cbf888b52bfa-utilities\") pod \"community-operators-98x65\" (UID: \"dbe85769-d394-4e96-a35e-cbf888b52bfa\") " pod="openshift-marketplace/community-operators-98x65" Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:32.215216 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbe85769-d394-4e96-a35e-cbf888b52bfa-catalog-content\") pod \"community-operators-98x65\" (UID: \"dbe85769-d394-4e96-a35e-cbf888b52bfa\") " pod="openshift-marketplace/community-operators-98x65" Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:32.316876 4755 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ngn7c\" (UniqueName: \"kubernetes.io/projected/dbe85769-d394-4e96-a35e-cbf888b52bfa-kube-api-access-ngn7c\") pod \"community-operators-98x65\" (UID: \"dbe85769-d394-4e96-a35e-cbf888b52bfa\") " pod="openshift-marketplace/community-operators-98x65" Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:32.316962 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbe85769-d394-4e96-a35e-cbf888b52bfa-utilities\") pod \"community-operators-98x65\" (UID: \"dbe85769-d394-4e96-a35e-cbf888b52bfa\") " pod="openshift-marketplace/community-operators-98x65" Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:32.317004 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbe85769-d394-4e96-a35e-cbf888b52bfa-catalog-content\") pod \"community-operators-98x65\" (UID: \"dbe85769-d394-4e96-a35e-cbf888b52bfa\") " pod="openshift-marketplace/community-operators-98x65" Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:32.317758 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbe85769-d394-4e96-a35e-cbf888b52bfa-utilities\") pod \"community-operators-98x65\" (UID: \"dbe85769-d394-4e96-a35e-cbf888b52bfa\") " pod="openshift-marketplace/community-operators-98x65" Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:32.318226 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbe85769-d394-4e96-a35e-cbf888b52bfa-catalog-content\") pod \"community-operators-98x65\" (UID: \"dbe85769-d394-4e96-a35e-cbf888b52bfa\") " pod="openshift-marketplace/community-operators-98x65" Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:32.325102 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1" path="/var/lib/kubelet/pods/2a07bcaf-0f1c-478d-9ba5-cdb4a1e912d1/volumes" Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:32.325645 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47b5e8fc-e79e-4cd2-906b-0d8116e4d608" path="/var/lib/kubelet/pods/47b5e8fc-e79e-4cd2-906b-0d8116e4d608/volumes" Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:32.326214 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fce8bb3-2fc9-496a-b0c0-873427d27571" path="/var/lib/kubelet/pods/9fce8bb3-2fc9-496a-b0c0-873427d27571/volumes" Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:32.327295 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e105e7e0-0046-47ca-8a73-c27e385a0301" path="/var/lib/kubelet/pods/e105e7e0-0046-47ca-8a73-c27e385a0301/volumes" Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:32.327713 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4344c16-3181-42d3-9d94-6cccd3fe8cc0" path="/var/lib/kubelet/pods/f4344c16-3181-42d3-9d94-6cccd3fe8cc0/volumes" Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:32.337001 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngn7c\" (UniqueName: \"kubernetes.io/projected/dbe85769-d394-4e96-a35e-cbf888b52bfa-kube-api-access-ngn7c\") pod \"community-operators-98x65\" (UID: \"dbe85769-d394-4e96-a35e-cbf888b52bfa\") " pod="openshift-marketplace/community-operators-98x65" Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:32.495442 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-98x65" Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:32.699030 4755 generic.go:334] "Generic (PLEG): container finished" podID="481780d1-bd6b-4674-a476-9e10935c1927" containerID="60b54b2ceb2b0a3344bacb81884675d854f347e3a06b1bedcb5a81e175e4fd0b" exitCode=0 Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:32.699124 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4trd" event={"ID":"481780d1-bd6b-4674-a476-9e10935c1927","Type":"ContainerDied","Data":"60b54b2ceb2b0a3344bacb81884675d854f347e3a06b1bedcb5a81e175e4fd0b"} Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:32.699188 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4trd" event={"ID":"481780d1-bd6b-4674-a476-9e10935c1927","Type":"ContainerStarted","Data":"8a7061440280816dcbe778611812428337bec0bf5131ff93d630ed521e5b7157"} Feb 24 10:02:32 crc kubenswrapper[4755]: I0224 10:02:32.873291 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-98x65"] Feb 24 10:02:32 crc kubenswrapper[4755]: W0224 10:02:32.880843 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbe85769_d394_4e96_a35e_cbf888b52bfa.slice/crio-c3e29f04c665af2600811d70d64300585f751cf13c377df85bdea36d6d371656 WatchSource:0}: Error finding container c3e29f04c665af2600811d70d64300585f751cf13c377df85bdea36d6d371656: Status 404 returned error can't find the container with id c3e29f04c665af2600811d70d64300585f751cf13c377df85bdea36d6d371656 Feb 24 10:02:33 crc kubenswrapper[4755]: I0224 10:02:33.707250 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4trd" 
event={"ID":"481780d1-bd6b-4674-a476-9e10935c1927","Type":"ContainerStarted","Data":"5d9fb6702f511a2f17e80d41ab0f00b03ada6a0a6149a70e3544c1917023191e"} Feb 24 10:02:33 crc kubenswrapper[4755]: I0224 10:02:33.709472 4755 generic.go:334] "Generic (PLEG): container finished" podID="dbe85769-d394-4e96-a35e-cbf888b52bfa" containerID="1afac8bcb6e654053b77ec79c0e37075fe735f76789d248486920e11ab68a522" exitCode=0 Feb 24 10:02:33 crc kubenswrapper[4755]: I0224 10:02:33.709589 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98x65" event={"ID":"dbe85769-d394-4e96-a35e-cbf888b52bfa","Type":"ContainerDied","Data":"1afac8bcb6e654053b77ec79c0e37075fe735f76789d248486920e11ab68a522"} Feb 24 10:02:33 crc kubenswrapper[4755]: I0224 10:02:33.709636 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98x65" event={"ID":"dbe85769-d394-4e96-a35e-cbf888b52bfa","Type":"ContainerStarted","Data":"c3e29f04c665af2600811d70d64300585f751cf13c377df85bdea36d6d371656"} Feb 24 10:02:33 crc kubenswrapper[4755]: I0224 10:02:33.975251 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xct8p"] Feb 24 10:02:33 crc kubenswrapper[4755]: I0224 10:02:33.976454 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xct8p" Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:33.984584 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:33.996471 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xct8p"] Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.035217 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kbbz\" (UniqueName: \"kubernetes.io/projected/bb603a7a-33b9-4d51-b39b-43e58685bc2f-kube-api-access-5kbbz\") pod \"redhat-marketplace-xct8p\" (UID: \"bb603a7a-33b9-4d51-b39b-43e58685bc2f\") " pod="openshift-marketplace/redhat-marketplace-xct8p" Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.035329 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb603a7a-33b9-4d51-b39b-43e58685bc2f-catalog-content\") pod \"redhat-marketplace-xct8p\" (UID: \"bb603a7a-33b9-4d51-b39b-43e58685bc2f\") " pod="openshift-marketplace/redhat-marketplace-xct8p" Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.035388 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb603a7a-33b9-4d51-b39b-43e58685bc2f-utilities\") pod \"redhat-marketplace-xct8p\" (UID: \"bb603a7a-33b9-4d51-b39b-43e58685bc2f\") " pod="openshift-marketplace/redhat-marketplace-xct8p" Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.138126 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kbbz\" (UniqueName: \"kubernetes.io/projected/bb603a7a-33b9-4d51-b39b-43e58685bc2f-kube-api-access-5kbbz\") pod \"redhat-marketplace-xct8p\" (UID: 
\"bb603a7a-33b9-4d51-b39b-43e58685bc2f\") " pod="openshift-marketplace/redhat-marketplace-xct8p" Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.138241 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb603a7a-33b9-4d51-b39b-43e58685bc2f-catalog-content\") pod \"redhat-marketplace-xct8p\" (UID: \"bb603a7a-33b9-4d51-b39b-43e58685bc2f\") " pod="openshift-marketplace/redhat-marketplace-xct8p" Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.138292 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb603a7a-33b9-4d51-b39b-43e58685bc2f-utilities\") pod \"redhat-marketplace-xct8p\" (UID: \"bb603a7a-33b9-4d51-b39b-43e58685bc2f\") " pod="openshift-marketplace/redhat-marketplace-xct8p" Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.138940 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb603a7a-33b9-4d51-b39b-43e58685bc2f-utilities\") pod \"redhat-marketplace-xct8p\" (UID: \"bb603a7a-33b9-4d51-b39b-43e58685bc2f\") " pod="openshift-marketplace/redhat-marketplace-xct8p" Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.138991 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb603a7a-33b9-4d51-b39b-43e58685bc2f-catalog-content\") pod \"redhat-marketplace-xct8p\" (UID: \"bb603a7a-33b9-4d51-b39b-43e58685bc2f\") " pod="openshift-marketplace/redhat-marketplace-xct8p" Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.176490 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kbbz\" (UniqueName: \"kubernetes.io/projected/bb603a7a-33b9-4d51-b39b-43e58685bc2f-kube-api-access-5kbbz\") pod \"redhat-marketplace-xct8p\" (UID: \"bb603a7a-33b9-4d51-b39b-43e58685bc2f\") " 
pod="openshift-marketplace/redhat-marketplace-xct8p" Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.334813 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xct8p" Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.569398 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lb9tr"] Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.570765 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lb9tr" Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.572877 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.583801 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lb9tr"] Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.643166 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d29a0ff-cae3-433b-9377-2f3beca596df-utilities\") pod \"redhat-operators-lb9tr\" (UID: \"6d29a0ff-cae3-433b-9377-2f3beca596df\") " pod="openshift-marketplace/redhat-operators-lb9tr" Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.643257 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d29a0ff-cae3-433b-9377-2f3beca596df-catalog-content\") pod \"redhat-operators-lb9tr\" (UID: \"6d29a0ff-cae3-433b-9377-2f3beca596df\") " pod="openshift-marketplace/redhat-operators-lb9tr" Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.643278 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v77rw\" (UniqueName: 
\"kubernetes.io/projected/6d29a0ff-cae3-433b-9377-2f3beca596df-kube-api-access-v77rw\") pod \"redhat-operators-lb9tr\" (UID: \"6d29a0ff-cae3-433b-9377-2f3beca596df\") " pod="openshift-marketplace/redhat-operators-lb9tr" Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.717169 4755 generic.go:334] "Generic (PLEG): container finished" podID="481780d1-bd6b-4674-a476-9e10935c1927" containerID="5d9fb6702f511a2f17e80d41ab0f00b03ada6a0a6149a70e3544c1917023191e" exitCode=0 Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.717234 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4trd" event={"ID":"481780d1-bd6b-4674-a476-9e10935c1927","Type":"ContainerDied","Data":"5d9fb6702f511a2f17e80d41ab0f00b03ada6a0a6149a70e3544c1917023191e"} Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.719347 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98x65" event={"ID":"dbe85769-d394-4e96-a35e-cbf888b52bfa","Type":"ContainerStarted","Data":"93784774c3a79add455295a69a7396b4ae97fba045bd2f2031a434a547fe62f1"} Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.744675 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d29a0ff-cae3-433b-9377-2f3beca596df-catalog-content\") pod \"redhat-operators-lb9tr\" (UID: \"6d29a0ff-cae3-433b-9377-2f3beca596df\") " pod="openshift-marketplace/redhat-operators-lb9tr" Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.744716 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v77rw\" (UniqueName: \"kubernetes.io/projected/6d29a0ff-cae3-433b-9377-2f3beca596df-kube-api-access-v77rw\") pod \"redhat-operators-lb9tr\" (UID: \"6d29a0ff-cae3-433b-9377-2f3beca596df\") " pod="openshift-marketplace/redhat-operators-lb9tr" Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.744790 4755 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d29a0ff-cae3-433b-9377-2f3beca596df-utilities\") pod \"redhat-operators-lb9tr\" (UID: \"6d29a0ff-cae3-433b-9377-2f3beca596df\") " pod="openshift-marketplace/redhat-operators-lb9tr" Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.745375 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d29a0ff-cae3-433b-9377-2f3beca596df-utilities\") pod \"redhat-operators-lb9tr\" (UID: \"6d29a0ff-cae3-433b-9377-2f3beca596df\") " pod="openshift-marketplace/redhat-operators-lb9tr" Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.745985 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d29a0ff-cae3-433b-9377-2f3beca596df-catalog-content\") pod \"redhat-operators-lb9tr\" (UID: \"6d29a0ff-cae3-433b-9377-2f3beca596df\") " pod="openshift-marketplace/redhat-operators-lb9tr" Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.763945 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xct8p"] Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.770794 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v77rw\" (UniqueName: \"kubernetes.io/projected/6d29a0ff-cae3-433b-9377-2f3beca596df-kube-api-access-v77rw\") pod \"redhat-operators-lb9tr\" (UID: \"6d29a0ff-cae3-433b-9377-2f3beca596df\") " pod="openshift-marketplace/redhat-operators-lb9tr" Feb 24 10:02:34 crc kubenswrapper[4755]: W0224 10:02:34.819192 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb603a7a_33b9_4d51_b39b_43e58685bc2f.slice/crio-34234eb90b37dca470afcdc93eec55ab17ed22b61aa68ff359f9f15ae4ba0b57 WatchSource:0}: Error finding container 
34234eb90b37dca470afcdc93eec55ab17ed22b61aa68ff359f9f15ae4ba0b57: Status 404 returned error can't find the container with id 34234eb90b37dca470afcdc93eec55ab17ed22b61aa68ff359f9f15ae4ba0b57 Feb 24 10:02:34 crc kubenswrapper[4755]: I0224 10:02:34.889305 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lb9tr" Feb 24 10:02:35 crc kubenswrapper[4755]: I0224 10:02:35.143486 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lb9tr"] Feb 24 10:02:35 crc kubenswrapper[4755]: W0224 10:02:35.149088 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d29a0ff_cae3_433b_9377_2f3beca596df.slice/crio-519dd15bf0788dc0380ba810ea3ec3e38420d03e1aab2cd3fb8f63b6e1dd1767 WatchSource:0}: Error finding container 519dd15bf0788dc0380ba810ea3ec3e38420d03e1aab2cd3fb8f63b6e1dd1767: Status 404 returned error can't find the container with id 519dd15bf0788dc0380ba810ea3ec3e38420d03e1aab2cd3fb8f63b6e1dd1767 Feb 24 10:02:35 crc kubenswrapper[4755]: I0224 10:02:35.725968 4755 generic.go:334] "Generic (PLEG): container finished" podID="6d29a0ff-cae3-433b-9377-2f3beca596df" containerID="ed495aa166f3684beaa50b7b98031364b2e7246fc6c6c58ca0d4d0494fb644d2" exitCode=0 Feb 24 10:02:35 crc kubenswrapper[4755]: I0224 10:02:35.726046 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb9tr" event={"ID":"6d29a0ff-cae3-433b-9377-2f3beca596df","Type":"ContainerDied","Data":"ed495aa166f3684beaa50b7b98031364b2e7246fc6c6c58ca0d4d0494fb644d2"} Feb 24 10:02:35 crc kubenswrapper[4755]: I0224 10:02:35.726091 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb9tr" event={"ID":"6d29a0ff-cae3-433b-9377-2f3beca596df","Type":"ContainerStarted","Data":"519dd15bf0788dc0380ba810ea3ec3e38420d03e1aab2cd3fb8f63b6e1dd1767"} Feb 24 10:02:35 
crc kubenswrapper[4755]: I0224 10:02:35.730355 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4trd" event={"ID":"481780d1-bd6b-4674-a476-9e10935c1927","Type":"ContainerStarted","Data":"ab0d2b2a0a65767b5599eff7994ae7382cd47cea5bb5cfaae79d888485c11508"} Feb 24 10:02:35 crc kubenswrapper[4755]: I0224 10:02:35.732035 4755 generic.go:334] "Generic (PLEG): container finished" podID="dbe85769-d394-4e96-a35e-cbf888b52bfa" containerID="93784774c3a79add455295a69a7396b4ae97fba045bd2f2031a434a547fe62f1" exitCode=0 Feb 24 10:02:35 crc kubenswrapper[4755]: I0224 10:02:35.732174 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98x65" event={"ID":"dbe85769-d394-4e96-a35e-cbf888b52bfa","Type":"ContainerDied","Data":"93784774c3a79add455295a69a7396b4ae97fba045bd2f2031a434a547fe62f1"} Feb 24 10:02:35 crc kubenswrapper[4755]: I0224 10:02:35.741101 4755 generic.go:334] "Generic (PLEG): container finished" podID="bb603a7a-33b9-4d51-b39b-43e58685bc2f" containerID="4900242b07a4eaddb08c054392c637128a8e917f540f59e4d70164f28e6e7b8d" exitCode=0 Feb 24 10:02:35 crc kubenswrapper[4755]: I0224 10:02:35.741174 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xct8p" event={"ID":"bb603a7a-33b9-4d51-b39b-43e58685bc2f","Type":"ContainerDied","Data":"4900242b07a4eaddb08c054392c637128a8e917f540f59e4d70164f28e6e7b8d"} Feb 24 10:02:35 crc kubenswrapper[4755]: I0224 10:02:35.741219 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xct8p" event={"ID":"bb603a7a-33b9-4d51-b39b-43e58685bc2f","Type":"ContainerStarted","Data":"34234eb90b37dca470afcdc93eec55ab17ed22b61aa68ff359f9f15ae4ba0b57"} Feb 24 10:02:35 crc kubenswrapper[4755]: I0224 10:02:35.771654 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k4trd" 
podStartSLOduration=2.285953782 podStartE2EDuration="4.771623926s" podCreationTimestamp="2026-02-24 10:02:31 +0000 UTC" firstStartedPulling="2026-02-24 10:02:32.700959716 +0000 UTC m=+457.157482249" lastFinishedPulling="2026-02-24 10:02:35.18662985 +0000 UTC m=+459.643152393" observedRunningTime="2026-02-24 10:02:35.766358156 +0000 UTC m=+460.222880789" watchObservedRunningTime="2026-02-24 10:02:35.771623926 +0000 UTC m=+460.228146509" Feb 24 10:02:36 crc kubenswrapper[4755]: I0224 10:02:36.749371 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb9tr" event={"ID":"6d29a0ff-cae3-433b-9377-2f3beca596df","Type":"ContainerStarted","Data":"f3bc863d916d118f98ffe1fb9a2b28e217b33397eab6c04d1564ed8d9c0ad610"} Feb 24 10:02:36 crc kubenswrapper[4755]: I0224 10:02:36.756341 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98x65" event={"ID":"dbe85769-d394-4e96-a35e-cbf888b52bfa","Type":"ContainerStarted","Data":"6a4797155fb65617030fc79a8c64e427ea14453736b1f4906121103dfdff2413"} Feb 24 10:02:36 crc kubenswrapper[4755]: I0224 10:02:36.759163 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xct8p" event={"ID":"bb603a7a-33b9-4d51-b39b-43e58685bc2f","Type":"ContainerStarted","Data":"322fdd070e186350aaa0ad8e00596de472d6f1bf1efdc3cdbdf382d15739c57f"} Feb 24 10:02:36 crc kubenswrapper[4755]: I0224 10:02:36.805540 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-98x65" podStartSLOduration=2.4148207839999998 podStartE2EDuration="4.80551964s" podCreationTimestamp="2026-02-24 10:02:32 +0000 UTC" firstStartedPulling="2026-02-24 10:02:33.71082043 +0000 UTC m=+458.167342973" lastFinishedPulling="2026-02-24 10:02:36.101519286 +0000 UTC m=+460.558041829" observedRunningTime="2026-02-24 10:02:36.805040706 +0000 UTC m=+461.261563259" watchObservedRunningTime="2026-02-24 
10:02:36.80551964 +0000 UTC m=+461.262042173" Feb 24 10:02:37 crc kubenswrapper[4755]: I0224 10:02:37.765813 4755 generic.go:334] "Generic (PLEG): container finished" podID="6d29a0ff-cae3-433b-9377-2f3beca596df" containerID="f3bc863d916d118f98ffe1fb9a2b28e217b33397eab6c04d1564ed8d9c0ad610" exitCode=0 Feb 24 10:02:37 crc kubenswrapper[4755]: I0224 10:02:37.765887 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb9tr" event={"ID":"6d29a0ff-cae3-433b-9377-2f3beca596df","Type":"ContainerDied","Data":"f3bc863d916d118f98ffe1fb9a2b28e217b33397eab6c04d1564ed8d9c0ad610"} Feb 24 10:02:37 crc kubenswrapper[4755]: I0224 10:02:37.769251 4755 generic.go:334] "Generic (PLEG): container finished" podID="bb603a7a-33b9-4d51-b39b-43e58685bc2f" containerID="322fdd070e186350aaa0ad8e00596de472d6f1bf1efdc3cdbdf382d15739c57f" exitCode=0 Feb 24 10:02:37 crc kubenswrapper[4755]: I0224 10:02:37.769374 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xct8p" event={"ID":"bb603a7a-33b9-4d51-b39b-43e58685bc2f","Type":"ContainerDied","Data":"322fdd070e186350aaa0ad8e00596de472d6f1bf1efdc3cdbdf382d15739c57f"} Feb 24 10:02:38 crc kubenswrapper[4755]: I0224 10:02:38.775798 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xct8p" event={"ID":"bb603a7a-33b9-4d51-b39b-43e58685bc2f","Type":"ContainerStarted","Data":"686ca10c41f203ee481c6c7639fdcf1dcdd6cfe7adb87bb571f6b3c5d39b9908"} Feb 24 10:02:38 crc kubenswrapper[4755]: I0224 10:02:38.778600 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb9tr" event={"ID":"6d29a0ff-cae3-433b-9377-2f3beca596df","Type":"ContainerStarted","Data":"d31c3a585343703d4a309a77e01cf6dc61a1945edf0f7c844cbddaca9ec59989"} Feb 24 10:02:38 crc kubenswrapper[4755]: I0224 10:02:38.797732 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-xct8p" podStartSLOduration=3.369903756 podStartE2EDuration="5.797715431s" podCreationTimestamp="2026-02-24 10:02:33 +0000 UTC" firstStartedPulling="2026-02-24 10:02:35.743034387 +0000 UTC m=+460.199556970" lastFinishedPulling="2026-02-24 10:02:38.170846102 +0000 UTC m=+462.627368645" observedRunningTime="2026-02-24 10:02:38.793145071 +0000 UTC m=+463.249667614" watchObservedRunningTime="2026-02-24 10:02:38.797715431 +0000 UTC m=+463.254237984" Feb 24 10:02:41 crc kubenswrapper[4755]: I0224 10:02:41.506005 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k4trd" Feb 24 10:02:41 crc kubenswrapper[4755]: I0224 10:02:41.506088 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k4trd" Feb 24 10:02:41 crc kubenswrapper[4755]: I0224 10:02:41.550237 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k4trd" Feb 24 10:02:41 crc kubenswrapper[4755]: I0224 10:02:41.569936 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lb9tr" podStartSLOduration=5.094163415 podStartE2EDuration="7.569920976s" podCreationTimestamp="2026-02-24 10:02:34 +0000 UTC" firstStartedPulling="2026-02-24 10:02:35.732341552 +0000 UTC m=+460.188864095" lastFinishedPulling="2026-02-24 10:02:38.208099093 +0000 UTC m=+462.664621656" observedRunningTime="2026-02-24 10:02:38.817680558 +0000 UTC m=+463.274203091" watchObservedRunningTime="2026-02-24 10:02:41.569920976 +0000 UTC m=+466.026443519" Feb 24 10:02:41 crc kubenswrapper[4755]: I0224 10:02:41.859128 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k4trd" Feb 24 10:02:42 crc kubenswrapper[4755]: I0224 10:02:42.496101 4755 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-98x65" Feb 24 10:02:42 crc kubenswrapper[4755]: I0224 10:02:42.496183 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-98x65" Feb 24 10:02:42 crc kubenswrapper[4755]: I0224 10:02:42.532748 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-98x65" Feb 24 10:02:42 crc kubenswrapper[4755]: I0224 10:02:42.874116 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-98x65" Feb 24 10:02:44 crc kubenswrapper[4755]: I0224 10:02:44.335307 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xct8p" Feb 24 10:02:44 crc kubenswrapper[4755]: I0224 10:02:44.335594 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xct8p" Feb 24 10:02:44 crc kubenswrapper[4755]: I0224 10:02:44.424610 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xct8p" Feb 24 10:02:44 crc kubenswrapper[4755]: I0224 10:02:44.863823 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xct8p" Feb 24 10:02:44 crc kubenswrapper[4755]: I0224 10:02:44.890399 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lb9tr" Feb 24 10:02:44 crc kubenswrapper[4755]: I0224 10:02:44.890441 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lb9tr" Feb 24 10:02:44 crc kubenswrapper[4755]: I0224 10:02:44.937405 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lb9tr" Feb 24 10:02:45 crc 
kubenswrapper[4755]: I0224 10:02:45.876956 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lb9tr" Feb 24 10:02:47 crc kubenswrapper[4755]: I0224 10:02:47.551877 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-bj8q9" Feb 24 10:02:47 crc kubenswrapper[4755]: I0224 10:02:47.692056 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7tncz"] Feb 24 10:02:51 crc kubenswrapper[4755]: I0224 10:02:51.695112 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:02:51 crc kubenswrapper[4755]: I0224 10:02:51.695179 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 10:02:51 crc kubenswrapper[4755]: I0224 10:02:51.695242 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" Feb 24 10:02:51 crc kubenswrapper[4755]: I0224 10:02:51.695709 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6e30cd3f07dec468034b68f42462aa6b03bd99da31c7eea2aa712e6bd5b08ae2"} pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 10:02:51 crc kubenswrapper[4755]: I0224 
10:02:51.695757 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" containerID="cri-o://6e30cd3f07dec468034b68f42462aa6b03bd99da31c7eea2aa712e6bd5b08ae2" gracePeriod=600 Feb 24 10:02:51 crc kubenswrapper[4755]: I0224 10:02:51.854281 4755 generic.go:334] "Generic (PLEG): container finished" podID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerID="6e30cd3f07dec468034b68f42462aa6b03bd99da31c7eea2aa712e6bd5b08ae2" exitCode=0 Feb 24 10:02:51 crc kubenswrapper[4755]: I0224 10:02:51.854347 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerDied","Data":"6e30cd3f07dec468034b68f42462aa6b03bd99da31c7eea2aa712e6bd5b08ae2"} Feb 24 10:02:51 crc kubenswrapper[4755]: I0224 10:02:51.854578 4755 scope.go:117] "RemoveContainer" containerID="2161eb648464d24c62a458b7a658d549183c8fed8a6904d37a1bffc7930d992e" Feb 24 10:02:52 crc kubenswrapper[4755]: I0224 10:02:52.864769 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerStarted","Data":"1068b1d0f36c6f38acbfcad7f95ed44610ef96865713510472e65e78bddf609c"} Feb 24 10:02:56 crc kubenswrapper[4755]: I0224 10:02:56.720338 4755 scope.go:117] "RemoveContainer" containerID="8f0866820124dee85c6f10c84c850dc6d46e143349dbf3e5d52101adb7dc32f1" Feb 24 10:03:12 crc kubenswrapper[4755]: I0224 10:03:12.726159 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" podUID="275884c2-8599-4867-97aa-04d67ba35182" containerName="registry" containerID="cri-o://d8ca99e0460dc7e8f287447cffe7db5d085357becb24a02fc148cca2f8a1ab95" 
gracePeriod=30 Feb 24 10:03:13 crc kubenswrapper[4755]: I0224 10:03:13.657827 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 10:03:13 crc kubenswrapper[4755]: I0224 10:03:13.689990 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/275884c2-8599-4867-97aa-04d67ba35182-ca-trust-extracted\") pod \"275884c2-8599-4867-97aa-04d67ba35182\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " Feb 24 10:03:13 crc kubenswrapper[4755]: I0224 10:03:13.690318 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/275884c2-8599-4867-97aa-04d67ba35182-bound-sa-token\") pod \"275884c2-8599-4867-97aa-04d67ba35182\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " Feb 24 10:03:13 crc kubenswrapper[4755]: I0224 10:03:13.690444 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/275884c2-8599-4867-97aa-04d67ba35182-registry-certificates\") pod \"275884c2-8599-4867-97aa-04d67ba35182\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " Feb 24 10:03:13 crc kubenswrapper[4755]: I0224 10:03:13.690559 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t4zj\" (UniqueName: \"kubernetes.io/projected/275884c2-8599-4867-97aa-04d67ba35182-kube-api-access-4t4zj\") pod \"275884c2-8599-4867-97aa-04d67ba35182\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " Feb 24 10:03:13 crc kubenswrapper[4755]: I0224 10:03:13.690657 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/275884c2-8599-4867-97aa-04d67ba35182-trusted-ca\") pod \"275884c2-8599-4867-97aa-04d67ba35182\" (UID: 
\"275884c2-8599-4867-97aa-04d67ba35182\") " Feb 24 10:03:13 crc kubenswrapper[4755]: I0224 10:03:13.690893 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"275884c2-8599-4867-97aa-04d67ba35182\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " Feb 24 10:03:13 crc kubenswrapper[4755]: I0224 10:03:13.691004 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/275884c2-8599-4867-97aa-04d67ba35182-registry-tls\") pod \"275884c2-8599-4867-97aa-04d67ba35182\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " Feb 24 10:03:13 crc kubenswrapper[4755]: I0224 10:03:13.691127 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/275884c2-8599-4867-97aa-04d67ba35182-installation-pull-secrets\") pod \"275884c2-8599-4867-97aa-04d67ba35182\" (UID: \"275884c2-8599-4867-97aa-04d67ba35182\") " Feb 24 10:03:13 crc kubenswrapper[4755]: I0224 10:03:13.691599 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/275884c2-8599-4867-97aa-04d67ba35182-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "275884c2-8599-4867-97aa-04d67ba35182" (UID: "275884c2-8599-4867-97aa-04d67ba35182"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:03:13 crc kubenswrapper[4755]: I0224 10:03:13.691629 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/275884c2-8599-4867-97aa-04d67ba35182-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "275884c2-8599-4867-97aa-04d67ba35182" (UID: "275884c2-8599-4867-97aa-04d67ba35182"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:03:13 crc kubenswrapper[4755]: I0224 10:03:13.696681 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/275884c2-8599-4867-97aa-04d67ba35182-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "275884c2-8599-4867-97aa-04d67ba35182" (UID: "275884c2-8599-4867-97aa-04d67ba35182"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:03:13 crc kubenswrapper[4755]: I0224 10:03:13.696856 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275884c2-8599-4867-97aa-04d67ba35182-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "275884c2-8599-4867-97aa-04d67ba35182" (UID: "275884c2-8599-4867-97aa-04d67ba35182"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 10:03:13 crc kubenswrapper[4755]: I0224 10:03:13.697902 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/275884c2-8599-4867-97aa-04d67ba35182-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "275884c2-8599-4867-97aa-04d67ba35182" (UID: "275884c2-8599-4867-97aa-04d67ba35182"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:03:13 crc kubenswrapper[4755]: I0224 10:03:13.699501 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/275884c2-8599-4867-97aa-04d67ba35182-kube-api-access-4t4zj" (OuterVolumeSpecName: "kube-api-access-4t4zj") pod "275884c2-8599-4867-97aa-04d67ba35182" (UID: "275884c2-8599-4867-97aa-04d67ba35182"). InnerVolumeSpecName "kube-api-access-4t4zj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:03:13 crc kubenswrapper[4755]: I0224 10:03:13.718349 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/275884c2-8599-4867-97aa-04d67ba35182-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "275884c2-8599-4867-97aa-04d67ba35182" (UID: "275884c2-8599-4867-97aa-04d67ba35182"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:03:13 crc kubenswrapper[4755]: I0224 10:03:13.718622 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "275884c2-8599-4867-97aa-04d67ba35182" (UID: "275884c2-8599-4867-97aa-04d67ba35182"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 24 10:03:13 crc kubenswrapper[4755]: I0224 10:03:13.793162 4755 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/275884c2-8599-4867-97aa-04d67ba35182-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 24 10:03:13 crc kubenswrapper[4755]: I0224 10:03:13.793225 4755 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/275884c2-8599-4867-97aa-04d67ba35182-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 10:03:13 crc kubenswrapper[4755]: I0224 10:03:13.793247 4755 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/275884c2-8599-4867-97aa-04d67ba35182-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 24 10:03:13 crc kubenswrapper[4755]: I0224 10:03:13.793298 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t4zj\" (UniqueName: 
\"kubernetes.io/projected/275884c2-8599-4867-97aa-04d67ba35182-kube-api-access-4t4zj\") on node \"crc\" DevicePath \"\"" Feb 24 10:03:13 crc kubenswrapper[4755]: I0224 10:03:13.793322 4755 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/275884c2-8599-4867-97aa-04d67ba35182-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 10:03:13 crc kubenswrapper[4755]: I0224 10:03:13.793340 4755 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/275884c2-8599-4867-97aa-04d67ba35182-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 24 10:03:13 crc kubenswrapper[4755]: I0224 10:03:13.793358 4755 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/275884c2-8599-4867-97aa-04d67ba35182-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 24 10:03:14 crc kubenswrapper[4755]: I0224 10:03:14.005277 4755 generic.go:334] "Generic (PLEG): container finished" podID="275884c2-8599-4867-97aa-04d67ba35182" containerID="d8ca99e0460dc7e8f287447cffe7db5d085357becb24a02fc148cca2f8a1ab95" exitCode=0 Feb 24 10:03:14 crc kubenswrapper[4755]: I0224 10:03:14.005352 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" event={"ID":"275884c2-8599-4867-97aa-04d67ba35182","Type":"ContainerDied","Data":"d8ca99e0460dc7e8f287447cffe7db5d085357becb24a02fc148cca2f8a1ab95"} Feb 24 10:03:14 crc kubenswrapper[4755]: I0224 10:03:14.005409 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" event={"ID":"275884c2-8599-4867-97aa-04d67ba35182","Type":"ContainerDied","Data":"72b1f54ca806d56403ee67bd02409c87980e9f4004ae7c74c7b57daf736b643f"} Feb 24 10:03:14 crc kubenswrapper[4755]: I0224 10:03:14.005415 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7tncz" Feb 24 10:03:14 crc kubenswrapper[4755]: I0224 10:03:14.005438 4755 scope.go:117] "RemoveContainer" containerID="d8ca99e0460dc7e8f287447cffe7db5d085357becb24a02fc148cca2f8a1ab95" Feb 24 10:03:14 crc kubenswrapper[4755]: I0224 10:03:14.028735 4755 scope.go:117] "RemoveContainer" containerID="d8ca99e0460dc7e8f287447cffe7db5d085357becb24a02fc148cca2f8a1ab95" Feb 24 10:03:14 crc kubenswrapper[4755]: E0224 10:03:14.029266 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8ca99e0460dc7e8f287447cffe7db5d085357becb24a02fc148cca2f8a1ab95\": container with ID starting with d8ca99e0460dc7e8f287447cffe7db5d085357becb24a02fc148cca2f8a1ab95 not found: ID does not exist" containerID="d8ca99e0460dc7e8f287447cffe7db5d085357becb24a02fc148cca2f8a1ab95" Feb 24 10:03:14 crc kubenswrapper[4755]: I0224 10:03:14.029346 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8ca99e0460dc7e8f287447cffe7db5d085357becb24a02fc148cca2f8a1ab95"} err="failed to get container status \"d8ca99e0460dc7e8f287447cffe7db5d085357becb24a02fc148cca2f8a1ab95\": rpc error: code = NotFound desc = could not find container \"d8ca99e0460dc7e8f287447cffe7db5d085357becb24a02fc148cca2f8a1ab95\": container with ID starting with d8ca99e0460dc7e8f287447cffe7db5d085357becb24a02fc148cca2f8a1ab95 not found: ID does not exist" Feb 24 10:03:14 crc kubenswrapper[4755]: I0224 10:03:14.046320 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7tncz"] Feb 24 10:03:14 crc kubenswrapper[4755]: I0224 10:03:14.055396 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7tncz"] Feb 24 10:03:14 crc kubenswrapper[4755]: I0224 10:03:14.332280 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="275884c2-8599-4867-97aa-04d67ba35182" path="/var/lib/kubelet/pods/275884c2-8599-4867-97aa-04d67ba35182/volumes" Feb 24 10:04:51 crc kubenswrapper[4755]: I0224 10:04:51.695057 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:04:51 crc kubenswrapper[4755]: I0224 10:04:51.695898 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 10:05:21 crc kubenswrapper[4755]: I0224 10:05:21.695663 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:05:21 crc kubenswrapper[4755]: I0224 10:05:21.696404 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 10:05:51 crc kubenswrapper[4755]: I0224 10:05:51.695441 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:05:51 crc 
kubenswrapper[4755]: I0224 10:05:51.696006 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 10:05:51 crc kubenswrapper[4755]: I0224 10:05:51.696060 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" Feb 24 10:05:51 crc kubenswrapper[4755]: I0224 10:05:51.697062 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1068b1d0f36c6f38acbfcad7f95ed44610ef96865713510472e65e78bddf609c"} pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 10:05:51 crc kubenswrapper[4755]: I0224 10:05:51.697190 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" containerID="cri-o://1068b1d0f36c6f38acbfcad7f95ed44610ef96865713510472e65e78bddf609c" gracePeriod=600 Feb 24 10:05:51 crc kubenswrapper[4755]: I0224 10:05:51.974955 4755 generic.go:334] "Generic (PLEG): container finished" podID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerID="1068b1d0f36c6f38acbfcad7f95ed44610ef96865713510472e65e78bddf609c" exitCode=0 Feb 24 10:05:51 crc kubenswrapper[4755]: I0224 10:05:51.975026 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerDied","Data":"1068b1d0f36c6f38acbfcad7f95ed44610ef96865713510472e65e78bddf609c"} 
Feb 24 10:05:51 crc kubenswrapper[4755]: I0224 10:05:51.975155 4755 scope.go:117] "RemoveContainer" containerID="6e30cd3f07dec468034b68f42462aa6b03bd99da31c7eea2aa712e6bd5b08ae2" Feb 24 10:05:53 crc kubenswrapper[4755]: I0224 10:05:53.993621 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerStarted","Data":"4c3d2ec1f41e0bfd106b66de040fc84e954e7115f3726ec67eeb0571aad4eeac"} Feb 24 10:06:36 crc kubenswrapper[4755]: I0224 10:06:36.073168 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53130: no serving certificate available for the kubelet" Feb 24 10:08:21 crc kubenswrapper[4755]: I0224 10:08:21.695375 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:08:21 crc kubenswrapper[4755]: I0224 10:08:21.695956 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 10:08:51 crc kubenswrapper[4755]: I0224 10:08:51.695357 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:08:51 crc kubenswrapper[4755]: I0224 10:08:51.696527 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" 
podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 10:08:55 crc kubenswrapper[4755]: I0224 10:08:55.993932 4755 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 24 10:09:21 crc kubenswrapper[4755]: I0224 10:09:21.695400 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:09:21 crc kubenswrapper[4755]: I0224 10:09:21.696244 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 10:09:21 crc kubenswrapper[4755]: I0224 10:09:21.696320 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" Feb 24 10:09:21 crc kubenswrapper[4755]: I0224 10:09:21.697034 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4c3d2ec1f41e0bfd106b66de040fc84e954e7115f3726ec67eeb0571aad4eeac"} pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 10:09:21 crc kubenswrapper[4755]: I0224 10:09:21.697158 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" 
podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" containerID="cri-o://4c3d2ec1f41e0bfd106b66de040fc84e954e7115f3726ec67eeb0571aad4eeac" gracePeriod=600 Feb 24 10:09:22 crc kubenswrapper[4755]: I0224 10:09:22.581485 4755 generic.go:334] "Generic (PLEG): container finished" podID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerID="4c3d2ec1f41e0bfd106b66de040fc84e954e7115f3726ec67eeb0571aad4eeac" exitCode=0 Feb 24 10:09:22 crc kubenswrapper[4755]: I0224 10:09:22.581584 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerDied","Data":"4c3d2ec1f41e0bfd106b66de040fc84e954e7115f3726ec67eeb0571aad4eeac"} Feb 24 10:09:22 crc kubenswrapper[4755]: I0224 10:09:22.581871 4755 scope.go:117] "RemoveContainer" containerID="1068b1d0f36c6f38acbfcad7f95ed44610ef96865713510472e65e78bddf609c" Feb 24 10:09:23 crc kubenswrapper[4755]: I0224 10:09:23.590386 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerStarted","Data":"a16cb28625dc72750f4129b2fe696ae5e7f59a186c5c1bd832753d622e238c1f"} Feb 24 10:11:51 crc kubenswrapper[4755]: I0224 10:11:51.695198 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:11:51 crc kubenswrapper[4755]: I0224 10:11:51.695838 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 24 10:11:51 crc kubenswrapper[4755]: I0224 10:11:51.768238 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-brcqt"] Feb 24 10:11:51 crc kubenswrapper[4755]: E0224 10:11:51.768483 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="275884c2-8599-4867-97aa-04d67ba35182" containerName="registry" Feb 24 10:11:51 crc kubenswrapper[4755]: I0224 10:11:51.768504 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="275884c2-8599-4867-97aa-04d67ba35182" containerName="registry" Feb 24 10:11:51 crc kubenswrapper[4755]: I0224 10:11:51.768621 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="275884c2-8599-4867-97aa-04d67ba35182" containerName="registry" Feb 24 10:11:51 crc kubenswrapper[4755]: I0224 10:11:51.770161 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brcqt" Feb 24 10:11:51 crc kubenswrapper[4755]: I0224 10:11:51.778707 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-brcqt"] Feb 24 10:11:51 crc kubenswrapper[4755]: I0224 10:11:51.785883 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4da244c-0e7a-431c-b06c-ac36a497e1e5-utilities\") pod \"community-operators-brcqt\" (UID: \"f4da244c-0e7a-431c-b06c-ac36a497e1e5\") " pod="openshift-marketplace/community-operators-brcqt" Feb 24 10:11:51 crc kubenswrapper[4755]: I0224 10:11:51.785941 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2jrv\" (UniqueName: \"kubernetes.io/projected/f4da244c-0e7a-431c-b06c-ac36a497e1e5-kube-api-access-m2jrv\") pod \"community-operators-brcqt\" (UID: \"f4da244c-0e7a-431c-b06c-ac36a497e1e5\") " pod="openshift-marketplace/community-operators-brcqt" Feb 24 
10:11:51 crc kubenswrapper[4755]: I0224 10:11:51.785989 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4da244c-0e7a-431c-b06c-ac36a497e1e5-catalog-content\") pod \"community-operators-brcqt\" (UID: \"f4da244c-0e7a-431c-b06c-ac36a497e1e5\") " pod="openshift-marketplace/community-operators-brcqt" Feb 24 10:11:51 crc kubenswrapper[4755]: I0224 10:11:51.887602 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4da244c-0e7a-431c-b06c-ac36a497e1e5-utilities\") pod \"community-operators-brcqt\" (UID: \"f4da244c-0e7a-431c-b06c-ac36a497e1e5\") " pod="openshift-marketplace/community-operators-brcqt" Feb 24 10:11:51 crc kubenswrapper[4755]: I0224 10:11:51.887645 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2jrv\" (UniqueName: \"kubernetes.io/projected/f4da244c-0e7a-431c-b06c-ac36a497e1e5-kube-api-access-m2jrv\") pod \"community-operators-brcqt\" (UID: \"f4da244c-0e7a-431c-b06c-ac36a497e1e5\") " pod="openshift-marketplace/community-operators-brcqt" Feb 24 10:11:51 crc kubenswrapper[4755]: I0224 10:11:51.887669 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4da244c-0e7a-431c-b06c-ac36a497e1e5-catalog-content\") pod \"community-operators-brcqt\" (UID: \"f4da244c-0e7a-431c-b06c-ac36a497e1e5\") " pod="openshift-marketplace/community-operators-brcqt" Feb 24 10:11:51 crc kubenswrapper[4755]: I0224 10:11:51.888330 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4da244c-0e7a-431c-b06c-ac36a497e1e5-catalog-content\") pod \"community-operators-brcqt\" (UID: \"f4da244c-0e7a-431c-b06c-ac36a497e1e5\") " pod="openshift-marketplace/community-operators-brcqt" Feb 24 
10:11:51 crc kubenswrapper[4755]: I0224 10:11:51.888553 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4da244c-0e7a-431c-b06c-ac36a497e1e5-utilities\") pod \"community-operators-brcqt\" (UID: \"f4da244c-0e7a-431c-b06c-ac36a497e1e5\") " pod="openshift-marketplace/community-operators-brcqt" Feb 24 10:11:51 crc kubenswrapper[4755]: I0224 10:11:51.918375 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2jrv\" (UniqueName: \"kubernetes.io/projected/f4da244c-0e7a-431c-b06c-ac36a497e1e5-kube-api-access-m2jrv\") pod \"community-operators-brcqt\" (UID: \"f4da244c-0e7a-431c-b06c-ac36a497e1e5\") " pod="openshift-marketplace/community-operators-brcqt" Feb 24 10:11:52 crc kubenswrapper[4755]: I0224 10:11:52.091104 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brcqt" Feb 24 10:11:52 crc kubenswrapper[4755]: I0224 10:11:52.328738 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-brcqt"] Feb 24 10:11:52 crc kubenswrapper[4755]: I0224 10:11:52.603277 4755 generic.go:334] "Generic (PLEG): container finished" podID="f4da244c-0e7a-431c-b06c-ac36a497e1e5" containerID="3da0f9165c35ccd4e51ce559b461aed7cc76bf8bd5249996f0ae79f357e53a59" exitCode=0 Feb 24 10:11:52 crc kubenswrapper[4755]: I0224 10:11:52.603425 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brcqt" event={"ID":"f4da244c-0e7a-431c-b06c-ac36a497e1e5","Type":"ContainerDied","Data":"3da0f9165c35ccd4e51ce559b461aed7cc76bf8bd5249996f0ae79f357e53a59"} Feb 24 10:11:52 crc kubenswrapper[4755]: I0224 10:11:52.603647 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brcqt" 
event={"ID":"f4da244c-0e7a-431c-b06c-ac36a497e1e5","Type":"ContainerStarted","Data":"854054ca6f1d19089c006eb135e5863c8326a9ad35587f8aa2eebdc42c80c30f"} Feb 24 10:11:52 crc kubenswrapper[4755]: I0224 10:11:52.605587 4755 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 10:11:53 crc kubenswrapper[4755]: I0224 10:11:53.612882 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brcqt" event={"ID":"f4da244c-0e7a-431c-b06c-ac36a497e1e5","Type":"ContainerStarted","Data":"b5fe4327b5a1b1a632163d9191cd93d84f897b157bc2c2b84e41893be4138505"} Feb 24 10:11:54 crc kubenswrapper[4755]: I0224 10:11:54.621167 4755 generic.go:334] "Generic (PLEG): container finished" podID="f4da244c-0e7a-431c-b06c-ac36a497e1e5" containerID="b5fe4327b5a1b1a632163d9191cd93d84f897b157bc2c2b84e41893be4138505" exitCode=0 Feb 24 10:11:54 crc kubenswrapper[4755]: I0224 10:11:54.621218 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brcqt" event={"ID":"f4da244c-0e7a-431c-b06c-ac36a497e1e5","Type":"ContainerDied","Data":"b5fe4327b5a1b1a632163d9191cd93d84f897b157bc2c2b84e41893be4138505"} Feb 24 10:11:55 crc kubenswrapper[4755]: I0224 10:11:55.630170 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brcqt" event={"ID":"f4da244c-0e7a-431c-b06c-ac36a497e1e5","Type":"ContainerStarted","Data":"d2c1e3da4124cc1b33d5f07b589388f7253f5d8af32beb5254595c56350933f3"} Feb 24 10:11:55 crc kubenswrapper[4755]: I0224 10:11:55.656538 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-brcqt" podStartSLOduration=2.245834764 podStartE2EDuration="4.656511644s" podCreationTimestamp="2026-02-24 10:11:51 +0000 UTC" firstStartedPulling="2026-02-24 10:11:52.605381788 +0000 UTC m=+1017.061904331" lastFinishedPulling="2026-02-24 10:11:55.016058638 +0000 UTC 
m=+1019.472581211" observedRunningTime="2026-02-24 10:11:55.655511174 +0000 UTC m=+1020.112033777" watchObservedRunningTime="2026-02-24 10:11:55.656511644 +0000 UTC m=+1020.113034187" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.442449 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fljft"] Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.443552 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="ovn-controller" containerID="cri-o://a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7" gracePeriod=30 Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.443575 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="nbdb" containerID="cri-o://cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699" gracePeriod=30 Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.443975 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="sbdb" containerID="cri-o://91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794" gracePeriod=30 Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.444004 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321" gracePeriod=30 Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.443983 4755 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-fljft" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="northd" containerID="cri-o://462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98" gracePeriod=30 Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.444098 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="kube-rbac-proxy-node" containerID="cri-o://b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba" gracePeriod=30 Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.444169 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="ovn-acl-logging" containerID="cri-o://87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e" gracePeriod=30 Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.525843 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="ovnkube-controller" containerID="cri-o://10683c06882955ac04e9efcd515108d58cda8c44a2546a4be26bf589dd99f81c" gracePeriod=30 Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.665976 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dwm6v_79ca0953-3a40-45a2-9305-02272f036006/kube-multus/2.log" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.666542 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dwm6v_79ca0953-3a40-45a2-9305-02272f036006/kube-multus/1.log" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.666573 4755 generic.go:334] "Generic (PLEG): container finished" podID="79ca0953-3a40-45a2-9305-02272f036006" 
containerID="1c64c7bea57cafc2c6ca12a0770020f27ecb2ee1b557ba321c9d9cc8c526b2b7" exitCode=2 Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.666614 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dwm6v" event={"ID":"79ca0953-3a40-45a2-9305-02272f036006","Type":"ContainerDied","Data":"1c64c7bea57cafc2c6ca12a0770020f27ecb2ee1b557ba321c9d9cc8c526b2b7"} Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.666645 4755 scope.go:117] "RemoveContainer" containerID="2b92711a65ad6cebe8ab7873f779cbee9056647a29d6fa95b4000b68ed70322b" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.667038 4755 scope.go:117] "RemoveContainer" containerID="1c64c7bea57cafc2c6ca12a0770020f27ecb2ee1b557ba321c9d9cc8c526b2b7" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.669198 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fljft_787109ef-edb9-4334-afc7-6197f57f444f/ovnkube-controller/3.log" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.670706 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fljft_787109ef-edb9-4334-afc7-6197f57f444f/ovn-acl-logging/0.log" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.671269 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fljft_787109ef-edb9-4334-afc7-6197f57f444f/ovn-controller/0.log" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.671636 4755 generic.go:334] "Generic (PLEG): container finished" podID="787109ef-edb9-4334-afc7-6197f57f444f" containerID="10683c06882955ac04e9efcd515108d58cda8c44a2546a4be26bf589dd99f81c" exitCode=0 Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.671652 4755 generic.go:334] "Generic (PLEG): container finished" podID="787109ef-edb9-4334-afc7-6197f57f444f" containerID="635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321" exitCode=0 Feb 24 10:12:01 crc kubenswrapper[4755]: 
I0224 10:12:01.671659 4755 generic.go:334] "Generic (PLEG): container finished" podID="787109ef-edb9-4334-afc7-6197f57f444f" containerID="b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba" exitCode=0 Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.671666 4755 generic.go:334] "Generic (PLEG): container finished" podID="787109ef-edb9-4334-afc7-6197f57f444f" containerID="87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e" exitCode=143 Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.671673 4755 generic.go:334] "Generic (PLEG): container finished" podID="787109ef-edb9-4334-afc7-6197f57f444f" containerID="a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7" exitCode=143 Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.671685 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerDied","Data":"10683c06882955ac04e9efcd515108d58cda8c44a2546a4be26bf589dd99f81c"} Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.671700 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerDied","Data":"635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321"} Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.671709 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerDied","Data":"b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba"} Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.671718 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerDied","Data":"87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e"} 
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.671726 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerDied","Data":"a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7"} Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.753112 4755 scope.go:117] "RemoveContainer" containerID="dd4a9a780d4c161dd7077ffe6890f5ed848af90cf0522093b8dbaecff93471e7" Feb 24 10:12:01 crc kubenswrapper[4755]: E0224 10:12:01.760569 4755 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699 is running failed: container process not found" containerID="cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 24 10:12:01 crc kubenswrapper[4755]: E0224 10:12:01.760620 4755 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794 is running failed: container process not found" containerID="91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 24 10:12:01 crc kubenswrapper[4755]: E0224 10:12:01.760877 4755 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699 is running failed: container process not found" containerID="cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 24 10:12:01 crc kubenswrapper[4755]: E0224 10:12:01.761907 4755 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794 is running failed: container process not found" containerID="91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 24 10:12:01 crc kubenswrapper[4755]: E0224 10:12:01.762224 4755 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794 is running failed: container process not found" containerID="91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 24 10:12:01 crc kubenswrapper[4755]: E0224 10:12:01.762274 4755 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="sbdb" Feb 24 10:12:01 crc kubenswrapper[4755]: E0224 10:12:01.762362 4755 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699 is running failed: container process not found" containerID="cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 24 10:12:01 crc kubenswrapper[4755]: E0224 10:12:01.762398 4755 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="nbdb" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.774746 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fljft_787109ef-edb9-4334-afc7-6197f57f444f/ovn-acl-logging/0.log" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.775728 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fljft_787109ef-edb9-4334-afc7-6197f57f444f/ovn-controller/0.log" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 
10:12:01.776346 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.843349 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dj8zh"] Feb 24 10:12:01 crc kubenswrapper[4755]: E0224 10:12:01.843647 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="kube-rbac-proxy-node" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.843668 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="kube-rbac-proxy-node" Feb 24 10:12:01 crc kubenswrapper[4755]: E0224 10:12:01.843685 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="sbdb" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.843697 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="sbdb" Feb 24 10:12:01 crc kubenswrapper[4755]: E0224 10:12:01.843718 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="ovnkube-controller" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.843731 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="ovnkube-controller" Feb 24 10:12:01 crc kubenswrapper[4755]: E0224 10:12:01.843745 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="ovnkube-controller" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.843757 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="ovnkube-controller" Feb 24 10:12:01 crc kubenswrapper[4755]: E0224 10:12:01.843772 4755 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="nbdb" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.843795 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="nbdb" Feb 24 10:12:01 crc kubenswrapper[4755]: E0224 10:12:01.843809 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="ovnkube-controller" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.843821 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="ovnkube-controller" Feb 24 10:12:01 crc kubenswrapper[4755]: E0224 10:12:01.843836 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="ovn-controller" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.843848 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="ovn-controller" Feb 24 10:12:01 crc kubenswrapper[4755]: E0224 10:12:01.843867 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="kubecfg-setup" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.843881 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="kubecfg-setup" Feb 24 10:12:01 crc kubenswrapper[4755]: E0224 10:12:01.843898 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="ovnkube-controller" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.843911 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="ovnkube-controller" Feb 24 10:12:01 crc kubenswrapper[4755]: E0224 10:12:01.843939 4755 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.843951 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 10:12:01 crc kubenswrapper[4755]: E0224 10:12:01.843969 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="ovn-acl-logging" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.843982 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="ovn-acl-logging" Feb 24 10:12:01 crc kubenswrapper[4755]: E0224 10:12:01.844000 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="northd" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.844012 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="northd" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.844329 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="ovnkube-controller" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.844349 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="northd" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.844370 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="ovnkube-controller" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.844386 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.844404 4755 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="sbdb" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.844420 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="nbdb" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.844437 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="ovn-controller" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.844451 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="ovn-acl-logging" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.844469 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="ovnkube-controller" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.844486 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="ovnkube-controller" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.844500 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="kube-rbac-proxy-node" Feb 24 10:12:01 crc kubenswrapper[4755]: E0224 10:12:01.844664 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="ovnkube-controller" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.844678 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="ovnkube-controller" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.844846 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="787109ef-edb9-4334-afc7-6197f57f444f" containerName="ovnkube-controller" Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.847700 4755 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.918700 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-etc-openvswitch\") pod \"787109ef-edb9-4334-afc7-6197f57f444f\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") "
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919037 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/787109ef-edb9-4334-afc7-6197f57f444f-ovnkube-script-lib\") pod \"787109ef-edb9-4334-afc7-6197f57f444f\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") "
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919085 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-systemd-units\") pod \"787109ef-edb9-4334-afc7-6197f57f444f\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") "
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.918939 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "787109ef-edb9-4334-afc7-6197f57f444f" (UID: "787109ef-edb9-4334-afc7-6197f57f444f"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919118 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-run-openvswitch\") pod \"787109ef-edb9-4334-afc7-6197f57f444f\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") "
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919142 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-node-log\") pod \"787109ef-edb9-4334-afc7-6197f57f444f\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") "
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919165 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-slash\") pod \"787109ef-edb9-4334-afc7-6197f57f444f\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") "
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919189 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-kubelet\") pod \"787109ef-edb9-4334-afc7-6197f57f444f\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") "
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919210 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-run-ovn-kubernetes\") pod \"787109ef-edb9-4334-afc7-6197f57f444f\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") "
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919214 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "787109ef-edb9-4334-afc7-6197f57f444f" (UID: "787109ef-edb9-4334-afc7-6197f57f444f"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919242 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-run-systemd\") pod \"787109ef-edb9-4334-afc7-6197f57f444f\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") "
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919302 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "787109ef-edb9-4334-afc7-6197f57f444f" (UID: "787109ef-edb9-4334-afc7-6197f57f444f"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919296 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "787109ef-edb9-4334-afc7-6197f57f444f" (UID: "787109ef-edb9-4334-afc7-6197f57f444f"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919331 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-log-socket\") pod \"787109ef-edb9-4334-afc7-6197f57f444f\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") "
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919315 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "787109ef-edb9-4334-afc7-6197f57f444f" (UID: "787109ef-edb9-4334-afc7-6197f57f444f"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919382 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxn7g\" (UniqueName: \"kubernetes.io/projected/787109ef-edb9-4334-afc7-6197f57f444f-kube-api-access-kxn7g\") pod \"787109ef-edb9-4334-afc7-6197f57f444f\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") "
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919360 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-log-socket" (OuterVolumeSpecName: "log-socket") pod "787109ef-edb9-4334-afc7-6197f57f444f" (UID: "787109ef-edb9-4334-afc7-6197f57f444f"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919338 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-slash" (OuterVolumeSpecName: "host-slash") pod "787109ef-edb9-4334-afc7-6197f57f444f" (UID: "787109ef-edb9-4334-afc7-6197f57f444f"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919370 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-node-log" (OuterVolumeSpecName: "node-log") pod "787109ef-edb9-4334-afc7-6197f57f444f" (UID: "787109ef-edb9-4334-afc7-6197f57f444f"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919475 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-var-lib-openvswitch\") pod \"787109ef-edb9-4334-afc7-6197f57f444f\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") "
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919527 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-cni-bin\") pod \"787109ef-edb9-4334-afc7-6197f57f444f\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") "
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919564 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"787109ef-edb9-4334-afc7-6197f57f444f\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") "
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919589 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "787109ef-edb9-4334-afc7-6197f57f444f" (UID: "787109ef-edb9-4334-afc7-6197f57f444f"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919612 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/787109ef-edb9-4334-afc7-6197f57f444f-ovnkube-config\") pod \"787109ef-edb9-4334-afc7-6197f57f444f\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") "
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919627 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "787109ef-edb9-4334-afc7-6197f57f444f" (UID: "787109ef-edb9-4334-afc7-6197f57f444f"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919643 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "787109ef-edb9-4334-afc7-6197f57f444f" (UID: "787109ef-edb9-4334-afc7-6197f57f444f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919667 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-cni-netd\") pod \"787109ef-edb9-4334-afc7-6197f57f444f\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") "
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919729 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/787109ef-edb9-4334-afc7-6197f57f444f-ovn-node-metrics-cert\") pod \"787109ef-edb9-4334-afc7-6197f57f444f\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") "
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919752 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "787109ef-edb9-4334-afc7-6197f57f444f" (UID: "787109ef-edb9-4334-afc7-6197f57f444f"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919772 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/787109ef-edb9-4334-afc7-6197f57f444f-env-overrides\") pod \"787109ef-edb9-4334-afc7-6197f57f444f\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") "
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919818 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-run-ovn\") pod \"787109ef-edb9-4334-afc7-6197f57f444f\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") "
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919859 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-run-netns\") pod \"787109ef-edb9-4334-afc7-6197f57f444f\" (UID: \"787109ef-edb9-4334-afc7-6197f57f444f\") "
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.919941 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "787109ef-edb9-4334-afc7-6197f57f444f" (UID: "787109ef-edb9-4334-afc7-6197f57f444f"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.920127 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "787109ef-edb9-4334-afc7-6197f57f444f" (UID: "787109ef-edb9-4334-afc7-6197f57f444f"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.920766 4755 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-kubelet\") on node \"crc\" DevicePath \"\""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.920794 4755 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.920804 4755 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-log-socket\") on node \"crc\" DevicePath \"\""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.920813 4755 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.920822 4755 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-cni-bin\") on node \"crc\" DevicePath \"\""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.920830 4755 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.920839 4755 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-cni-netd\") on node \"crc\" DevicePath \"\""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.920848 4755 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.920856 4755 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-run-netns\") on node \"crc\" DevicePath \"\""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.920865 4755 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.920874 4755 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-systemd-units\") on node \"crc\" DevicePath \"\""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.920884 4755 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-run-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.920891 4755 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-node-log\") on node \"crc\" DevicePath \"\""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.920898 4755 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-host-slash\") on node \"crc\" DevicePath \"\""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.921024 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/787109ef-edb9-4334-afc7-6197f57f444f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "787109ef-edb9-4334-afc7-6197f57f444f" (UID: "787109ef-edb9-4334-afc7-6197f57f444f"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.921354 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/787109ef-edb9-4334-afc7-6197f57f444f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "787109ef-edb9-4334-afc7-6197f57f444f" (UID: "787109ef-edb9-4334-afc7-6197f57f444f"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.922221 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/787109ef-edb9-4334-afc7-6197f57f444f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "787109ef-edb9-4334-afc7-6197f57f444f" (UID: "787109ef-edb9-4334-afc7-6197f57f444f"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.928711 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/787109ef-edb9-4334-afc7-6197f57f444f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "787109ef-edb9-4334-afc7-6197f57f444f" (UID: "787109ef-edb9-4334-afc7-6197f57f444f"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.928711 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/787109ef-edb9-4334-afc7-6197f57f444f-kube-api-access-kxn7g" (OuterVolumeSpecName: "kube-api-access-kxn7g") pod "787109ef-edb9-4334-afc7-6197f57f444f" (UID: "787109ef-edb9-4334-afc7-6197f57f444f"). InnerVolumeSpecName "kube-api-access-kxn7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 10:12:01 crc kubenswrapper[4755]: I0224 10:12:01.936872 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "787109ef-edb9-4334-afc7-6197f57f444f" (UID: "787109ef-edb9-4334-afc7-6197f57f444f"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.022103 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tvnt\" (UniqueName: \"kubernetes.io/projected/62a8540c-7b42-4874-8a5a-ab648f039546-kube-api-access-6tvnt\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.022186 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-run-openvswitch\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.022230 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-etc-openvswitch\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.022270 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/62a8540c-7b42-4874-8a5a-ab648f039546-ovn-node-metrics-cert\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.022298 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-host-run-netns\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.022329 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-var-lib-openvswitch\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.022372 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-host-run-ovn-kubernetes\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.022397 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-host-cni-netd\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.022421 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.022444 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/62a8540c-7b42-4874-8a5a-ab648f039546-env-overrides\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.022472 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-host-slash\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.022642 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-host-kubelet\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.022719 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-run-ovn\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.022760 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-node-log\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.022825 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/62a8540c-7b42-4874-8a5a-ab648f039546-ovnkube-config\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.022895 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-run-systemd\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.022984 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/62a8540c-7b42-4874-8a5a-ab648f039546-ovnkube-script-lib\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.023052 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-host-cni-bin\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.023137 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-log-socket\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.023172 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-systemd-units\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.023235 4755 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/787109ef-edb9-4334-afc7-6197f57f444f-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.023260 4755 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/787109ef-edb9-4334-afc7-6197f57f444f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.023277 4755 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/787109ef-edb9-4334-afc7-6197f57f444f-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.023294 4755 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/787109ef-edb9-4334-afc7-6197f57f444f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.023312 4755 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/787109ef-edb9-4334-afc7-6197f57f444f-run-systemd\") on node \"crc\" DevicePath \"\""
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.023327 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxn7g\" (UniqueName: \"kubernetes.io/projected/787109ef-edb9-4334-afc7-6197f57f444f-kube-api-access-kxn7g\") on node \"crc\" DevicePath \"\""
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.091247 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-brcqt"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.091426 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-brcqt"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.125954 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-log-socket\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.126003 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-systemd-units\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.126031 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tvnt\" (UniqueName: \"kubernetes.io/projected/62a8540c-7b42-4874-8a5a-ab648f039546-kube-api-access-6tvnt\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.126049 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-run-openvswitch\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.126081 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-etc-openvswitch\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.126099 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/62a8540c-7b42-4874-8a5a-ab648f039546-ovn-node-metrics-cert\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.126114 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-host-run-netns\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.126129 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-var-lib-openvswitch\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.126153 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-host-run-ovn-kubernetes\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.126168 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-host-cni-netd\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.126184 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.126201 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/62a8540c-7b42-4874-8a5a-ab648f039546-env-overrides\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.126159 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-log-socket\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.126269 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-host-slash\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.126340 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-host-run-ovn-kubernetes\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.126245 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-host-slash\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.126362 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-host-cni-netd\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.126385 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.126410 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-systemd-units\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.126833 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/62a8540c-7b42-4874-8a5a-ab648f039546-env-overrides\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.126903 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-host-run-netns\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.126958 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-var-lib-openvswitch\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh"
Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.127006 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-run-openvswitch\") pod
\"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.127055 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-host-kubelet\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.127055 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-etc-openvswitch\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.127150 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-host-kubelet\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.127172 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-run-ovn\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.127280 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-node-log\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 
24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.127342 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/62a8540c-7b42-4874-8a5a-ab648f039546-ovnkube-config\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.127392 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-run-ovn\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.127412 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-run-systemd\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.127459 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-node-log\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.127473 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/62a8540c-7b42-4874-8a5a-ab648f039546-ovnkube-script-lib\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.127525 4755 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-host-cni-bin\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.127648 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-host-cni-bin\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.127716 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/62a8540c-7b42-4874-8a5a-ab648f039546-run-systemd\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.128377 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/62a8540c-7b42-4874-8a5a-ab648f039546-ovnkube-config\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.128996 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/62a8540c-7b42-4874-8a5a-ab648f039546-ovnkube-script-lib\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.129850 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/62a8540c-7b42-4874-8a5a-ab648f039546-ovn-node-metrics-cert\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.149672 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-brcqt" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.155688 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tvnt\" (UniqueName: \"kubernetes.io/projected/62a8540c-7b42-4874-8a5a-ab648f039546-kube-api-access-6tvnt\") pod \"ovnkube-node-dj8zh\" (UID: \"62a8540c-7b42-4874-8a5a-ab648f039546\") " pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.163226 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 24 10:12:02 crc kubenswrapper[4755]: W0224 10:12:02.206356 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62a8540c_7b42_4874_8a5a_ab648f039546.slice/crio-a1ba05c217e5e7bfd906447180c91e1c37e9773306da5e07111e052d78c4ca06 WatchSource:0}: Error finding container a1ba05c217e5e7bfd906447180c91e1c37e9773306da5e07111e052d78c4ca06: Status 404 returned error can't find the container with id a1ba05c217e5e7bfd906447180c91e1c37e9773306da5e07111e052d78c4ca06 Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.688037 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dwm6v_79ca0953-3a40-45a2-9305-02272f036006/kube-multus/2.log" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.688576 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dwm6v" 
event={"ID":"79ca0953-3a40-45a2-9305-02272f036006","Type":"ContainerStarted","Data":"dec9a14817af1c9fea46255fa48a725ee228d6a8789d436ca48acaf0e41a3a6d"} Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.692573 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" event={"ID":"62a8540c-7b42-4874-8a5a-ab648f039546","Type":"ContainerDied","Data":"6283ab255baf65c5ae7a19913e0c04fb22498ab252f57f6e50f9ff1b70e1542d"} Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.692487 4755 generic.go:334] "Generic (PLEG): container finished" podID="62a8540c-7b42-4874-8a5a-ab648f039546" containerID="6283ab255baf65c5ae7a19913e0c04fb22498ab252f57f6e50f9ff1b70e1542d" exitCode=0 Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.692900 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" event={"ID":"62a8540c-7b42-4874-8a5a-ab648f039546","Type":"ContainerStarted","Data":"a1ba05c217e5e7bfd906447180c91e1c37e9773306da5e07111e052d78c4ca06"} Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.701750 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fljft_787109ef-edb9-4334-afc7-6197f57f444f/ovn-acl-logging/0.log" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.702400 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fljft_787109ef-edb9-4334-afc7-6197f57f444f/ovn-controller/0.log" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.702894 4755 generic.go:334] "Generic (PLEG): container finished" podID="787109ef-edb9-4334-afc7-6197f57f444f" containerID="91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794" exitCode=0 Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.702925 4755 generic.go:334] "Generic (PLEG): container finished" podID="787109ef-edb9-4334-afc7-6197f57f444f" 
containerID="cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699" exitCode=0 Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.702938 4755 generic.go:334] "Generic (PLEG): container finished" podID="787109ef-edb9-4334-afc7-6197f57f444f" containerID="462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98" exitCode=0 Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.703884 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.704804 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerDied","Data":"91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794"} Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.704845 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerDied","Data":"cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699"} Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.704893 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerDied","Data":"462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98"} Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.704910 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fljft" event={"ID":"787109ef-edb9-4334-afc7-6197f57f444f","Type":"ContainerDied","Data":"07b43dba684f13da28a69ca000291cca1139ad30738590ad3cda0dad0590c7a5"} Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.704937 4755 scope.go:117] "RemoveContainer" containerID="10683c06882955ac04e9efcd515108d58cda8c44a2546a4be26bf589dd99f81c" Feb 24 
10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.773832 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-brcqt" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.774307 4755 scope.go:117] "RemoveContainer" containerID="91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.813756 4755 scope.go:117] "RemoveContainer" containerID="cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.851342 4755 scope.go:117] "RemoveContainer" containerID="462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.862385 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fljft"] Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.875737 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fljft"] Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.881334 4755 scope.go:117] "RemoveContainer" containerID="635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.883788 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-brcqt"] Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.899852 4755 scope.go:117] "RemoveContainer" containerID="b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.917322 4755 scope.go:117] "RemoveContainer" containerID="87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.932621 4755 scope.go:117] "RemoveContainer" containerID="a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 
10:12:02.968991 4755 scope.go:117] "RemoveContainer" containerID="c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.996273 4755 scope.go:117] "RemoveContainer" containerID="10683c06882955ac04e9efcd515108d58cda8c44a2546a4be26bf589dd99f81c" Feb 24 10:12:02 crc kubenswrapper[4755]: E0224 10:12:02.996824 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10683c06882955ac04e9efcd515108d58cda8c44a2546a4be26bf589dd99f81c\": container with ID starting with 10683c06882955ac04e9efcd515108d58cda8c44a2546a4be26bf589dd99f81c not found: ID does not exist" containerID="10683c06882955ac04e9efcd515108d58cda8c44a2546a4be26bf589dd99f81c" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.996911 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10683c06882955ac04e9efcd515108d58cda8c44a2546a4be26bf589dd99f81c"} err="failed to get container status \"10683c06882955ac04e9efcd515108d58cda8c44a2546a4be26bf589dd99f81c\": rpc error: code = NotFound desc = could not find container \"10683c06882955ac04e9efcd515108d58cda8c44a2546a4be26bf589dd99f81c\": container with ID starting with 10683c06882955ac04e9efcd515108d58cda8c44a2546a4be26bf589dd99f81c not found: ID does not exist" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.996962 4755 scope.go:117] "RemoveContainer" containerID="91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794" Feb 24 10:12:02 crc kubenswrapper[4755]: E0224 10:12:02.997626 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\": container with ID starting with 91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794 not found: ID does not exist" 
containerID="91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.997678 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794"} err="failed to get container status \"91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\": rpc error: code = NotFound desc = could not find container \"91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\": container with ID starting with 91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794 not found: ID does not exist" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.997711 4755 scope.go:117] "RemoveContainer" containerID="cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699" Feb 24 10:12:02 crc kubenswrapper[4755]: E0224 10:12:02.998280 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\": container with ID starting with cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699 not found: ID does not exist" containerID="cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.998367 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699"} err="failed to get container status \"cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\": rpc error: code = NotFound desc = could not find container \"cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\": container with ID starting with cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699 not found: ID does not exist" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.998416 4755 scope.go:117] 
"RemoveContainer" containerID="462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98" Feb 24 10:12:02 crc kubenswrapper[4755]: E0224 10:12:02.999461 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\": container with ID starting with 462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98 not found: ID does not exist" containerID="462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.999527 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98"} err="failed to get container status \"462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\": rpc error: code = NotFound desc = could not find container \"462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\": container with ID starting with 462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98 not found: ID does not exist" Feb 24 10:12:02 crc kubenswrapper[4755]: I0224 10:12:02.999569 4755 scope.go:117] "RemoveContainer" containerID="635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321" Feb 24 10:12:03 crc kubenswrapper[4755]: E0224 10:12:03.000155 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\": container with ID starting with 635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321 not found: ID does not exist" containerID="635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.000224 4755 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321"} err="failed to get container status \"635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\": rpc error: code = NotFound desc = could not find container \"635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\": container with ID starting with 635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321 not found: ID does not exist" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.000255 4755 scope.go:117] "RemoveContainer" containerID="b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba" Feb 24 10:12:03 crc kubenswrapper[4755]: E0224 10:12:03.001614 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\": container with ID starting with b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba not found: ID does not exist" containerID="b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.001660 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba"} err="failed to get container status \"b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\": rpc error: code = NotFound desc = could not find container \"b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\": container with ID starting with b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba not found: ID does not exist" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.001695 4755 scope.go:117] "RemoveContainer" containerID="87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e" Feb 24 10:12:03 crc kubenswrapper[4755]: E0224 10:12:03.002101 4755 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\": container with ID starting with 87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e not found: ID does not exist" containerID="87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.002140 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e"} err="failed to get container status \"87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\": rpc error: code = NotFound desc = could not find container \"87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\": container with ID starting with 87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e not found: ID does not exist" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.002165 4755 scope.go:117] "RemoveContainer" containerID="a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7" Feb 24 10:12:03 crc kubenswrapper[4755]: E0224 10:12:03.002540 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\": container with ID starting with a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7 not found: ID does not exist" containerID="a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.002592 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7"} err="failed to get container status \"a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\": rpc error: code = NotFound desc = could not find container 
\"a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\": container with ID starting with a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7 not found: ID does not exist" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.002613 4755 scope.go:117] "RemoveContainer" containerID="c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584" Feb 24 10:12:03 crc kubenswrapper[4755]: E0224 10:12:03.003147 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\": container with ID starting with c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584 not found: ID does not exist" containerID="c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.003175 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584"} err="failed to get container status \"c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\": rpc error: code = NotFound desc = could not find container \"c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\": container with ID starting with c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584 not found: ID does not exist" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.003196 4755 scope.go:117] "RemoveContainer" containerID="10683c06882955ac04e9efcd515108d58cda8c44a2546a4be26bf589dd99f81c" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.003518 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10683c06882955ac04e9efcd515108d58cda8c44a2546a4be26bf589dd99f81c"} err="failed to get container status \"10683c06882955ac04e9efcd515108d58cda8c44a2546a4be26bf589dd99f81c\": rpc error: code = NotFound desc = could not find 
container \"10683c06882955ac04e9efcd515108d58cda8c44a2546a4be26bf589dd99f81c\": container with ID starting with 10683c06882955ac04e9efcd515108d58cda8c44a2546a4be26bf589dd99f81c not found: ID does not exist" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.003575 4755 scope.go:117] "RemoveContainer" containerID="91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.003896 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794"} err="failed to get container status \"91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\": rpc error: code = NotFound desc = could not find container \"91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\": container with ID starting with 91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794 not found: ID does not exist" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.003928 4755 scope.go:117] "RemoveContainer" containerID="cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.004387 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699"} err="failed to get container status \"cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\": rpc error: code = NotFound desc = could not find container \"cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\": container with ID starting with cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699 not found: ID does not exist" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.004414 4755 scope.go:117] "RemoveContainer" containerID="462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.004681 4755 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98"} err="failed to get container status \"462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\": rpc error: code = NotFound desc = could not find container \"462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\": container with ID starting with 462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98 not found: ID does not exist" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.004706 4755 scope.go:117] "RemoveContainer" containerID="635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.004967 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321"} err="failed to get container status \"635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\": rpc error: code = NotFound desc = could not find container \"635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\": container with ID starting with 635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321 not found: ID does not exist" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.004996 4755 scope.go:117] "RemoveContainer" containerID="b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.005386 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba"} err="failed to get container status \"b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\": rpc error: code = NotFound desc = could not find container \"b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\": container with ID starting with 
b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba not found: ID does not exist" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.005424 4755 scope.go:117] "RemoveContainer" containerID="87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.006100 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e"} err="failed to get container status \"87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\": rpc error: code = NotFound desc = could not find container \"87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\": container with ID starting with 87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e not found: ID does not exist" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.006154 4755 scope.go:117] "RemoveContainer" containerID="a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.006458 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7"} err="failed to get container status \"a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\": rpc error: code = NotFound desc = could not find container \"a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\": container with ID starting with a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7 not found: ID does not exist" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.006489 4755 scope.go:117] "RemoveContainer" containerID="c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.006869 4755 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584"} err="failed to get container status \"c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\": rpc error: code = NotFound desc = could not find container \"c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\": container with ID starting with c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584 not found: ID does not exist" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.006901 4755 scope.go:117] "RemoveContainer" containerID="10683c06882955ac04e9efcd515108d58cda8c44a2546a4be26bf589dd99f81c" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.007176 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10683c06882955ac04e9efcd515108d58cda8c44a2546a4be26bf589dd99f81c"} err="failed to get container status \"10683c06882955ac04e9efcd515108d58cda8c44a2546a4be26bf589dd99f81c\": rpc error: code = NotFound desc = could not find container \"10683c06882955ac04e9efcd515108d58cda8c44a2546a4be26bf589dd99f81c\": container with ID starting with 10683c06882955ac04e9efcd515108d58cda8c44a2546a4be26bf589dd99f81c not found: ID does not exist" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.007211 4755 scope.go:117] "RemoveContainer" containerID="91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.007697 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794"} err="failed to get container status \"91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\": rpc error: code = NotFound desc = could not find container \"91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794\": container with ID starting with 91de172184637dc4ed0c4dd0eca41872b62a69fbecaa88b20649feb0fc731794 not found: ID does not 
exist" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.007732 4755 scope.go:117] "RemoveContainer" containerID="cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.008478 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699"} err="failed to get container status \"cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\": rpc error: code = NotFound desc = could not find container \"cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699\": container with ID starting with cd67393484608663f198caaffab61492c2cc847a14c7e3f6d067a01f7727e699 not found: ID does not exist" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.008516 4755 scope.go:117] "RemoveContainer" containerID="462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.008970 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98"} err="failed to get container status \"462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\": rpc error: code = NotFound desc = could not find container \"462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98\": container with ID starting with 462b04a06a8c38b882e05ff51495dcbaa7a8b36c2aa8d7315ecec0ac191fae98 not found: ID does not exist" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.008993 4755 scope.go:117] "RemoveContainer" containerID="635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.009684 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321"} err="failed to get container status 
\"635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\": rpc error: code = NotFound desc = could not find container \"635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321\": container with ID starting with 635d07a9352c3399b703447538a97ba223a8d76e6deae27a6894f936a75bd321 not found: ID does not exist" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.009709 4755 scope.go:117] "RemoveContainer" containerID="b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.010015 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba"} err="failed to get container status \"b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\": rpc error: code = NotFound desc = could not find container \"b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba\": container with ID starting with b90534b7ec418bb0d66a81c464e9090a84f77268bf3879e1ce5f85e2e3c9f7ba not found: ID does not exist" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.010039 4755 scope.go:117] "RemoveContainer" containerID="87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.010365 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e"} err="failed to get container status \"87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\": rpc error: code = NotFound desc = could not find container \"87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e\": container with ID starting with 87e347c8793f508223b4aa448dbaa0cf360643b163bb914d370d2c6c444ed92e not found: ID does not exist" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.010390 4755 scope.go:117] "RemoveContainer" 
containerID="a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.010773 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7"} err="failed to get container status \"a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\": rpc error: code = NotFound desc = could not find container \"a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7\": container with ID starting with a4bd47ec8e4f8603cbef9a28d8849f8d7e422cdb038be1bd951163ad6ed266d7 not found: ID does not exist" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.010798 4755 scope.go:117] "RemoveContainer" containerID="c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.011085 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584"} err="failed to get container status \"c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\": rpc error: code = NotFound desc = could not find container \"c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584\": container with ID starting with c8fdf739d94f788458f95e022da6f6f918ab373fe981ddc3f295478fbc238584 not found: ID does not exist" Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.709925 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" event={"ID":"62a8540c-7b42-4874-8a5a-ab648f039546","Type":"ContainerStarted","Data":"91fa0c2b45a2e4ce99b0ca689925888396a986566a8f7557da9bc02cad1affa7"} Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.710410 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" 
event={"ID":"62a8540c-7b42-4874-8a5a-ab648f039546","Type":"ContainerStarted","Data":"57f60cd754235ac7b2a3da5b47dd3fed4d4fa2b5c769b386bb3f5f0a6a1b9f32"} Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.710423 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" event={"ID":"62a8540c-7b42-4874-8a5a-ab648f039546","Type":"ContainerStarted","Data":"cac55834f51946614e93e54917ed299509738b9b280282eb88ee9bfbef786fe7"} Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.710431 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" event={"ID":"62a8540c-7b42-4874-8a5a-ab648f039546","Type":"ContainerStarted","Data":"997c40aacbcf147639a1cdf47070ae996d791271b2b95db9c7efb63234c6f17f"} Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.710440 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" event={"ID":"62a8540c-7b42-4874-8a5a-ab648f039546","Type":"ContainerStarted","Data":"999ac33d80f31faf950f66289df191745b6f7d59a251a81ceaf0e13a60b55891"} Feb 24 10:12:03 crc kubenswrapper[4755]: I0224 10:12:03.710451 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" event={"ID":"62a8540c-7b42-4874-8a5a-ab648f039546","Type":"ContainerStarted","Data":"3264101b8b93f613125647898c332096a7bc9f2a75174e846f53eac6a5c7e435"} Feb 24 10:12:04 crc kubenswrapper[4755]: I0224 10:12:04.328243 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="787109ef-edb9-4334-afc7-6197f57f444f" path="/var/lib/kubelet/pods/787109ef-edb9-4334-afc7-6197f57f444f/volumes" Feb 24 10:12:04 crc kubenswrapper[4755]: I0224 10:12:04.717242 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-brcqt" podUID="f4da244c-0e7a-431c-b06c-ac36a497e1e5" containerName="registry-server" 
containerID="cri-o://d2c1e3da4124cc1b33d5f07b589388f7253f5d8af32beb5254595c56350933f3" gracePeriod=2 Feb 24 10:12:04 crc kubenswrapper[4755]: I0224 10:12:04.947281 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brcqt" Feb 24 10:12:04 crc kubenswrapper[4755]: I0224 10:12:04.965661 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4da244c-0e7a-431c-b06c-ac36a497e1e5-catalog-content\") pod \"f4da244c-0e7a-431c-b06c-ac36a497e1e5\" (UID: \"f4da244c-0e7a-431c-b06c-ac36a497e1e5\") " Feb 24 10:12:04 crc kubenswrapper[4755]: I0224 10:12:04.965775 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2jrv\" (UniqueName: \"kubernetes.io/projected/f4da244c-0e7a-431c-b06c-ac36a497e1e5-kube-api-access-m2jrv\") pod \"f4da244c-0e7a-431c-b06c-ac36a497e1e5\" (UID: \"f4da244c-0e7a-431c-b06c-ac36a497e1e5\") " Feb 24 10:12:04 crc kubenswrapper[4755]: I0224 10:12:04.965868 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4da244c-0e7a-431c-b06c-ac36a497e1e5-utilities\") pod \"f4da244c-0e7a-431c-b06c-ac36a497e1e5\" (UID: \"f4da244c-0e7a-431c-b06c-ac36a497e1e5\") " Feb 24 10:12:04 crc kubenswrapper[4755]: I0224 10:12:04.972105 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4da244c-0e7a-431c-b06c-ac36a497e1e5-utilities" (OuterVolumeSpecName: "utilities") pod "f4da244c-0e7a-431c-b06c-ac36a497e1e5" (UID: "f4da244c-0e7a-431c-b06c-ac36a497e1e5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:12:04 crc kubenswrapper[4755]: I0224 10:12:04.974666 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4da244c-0e7a-431c-b06c-ac36a497e1e5-kube-api-access-m2jrv" (OuterVolumeSpecName: "kube-api-access-m2jrv") pod "f4da244c-0e7a-431c-b06c-ac36a497e1e5" (UID: "f4da244c-0e7a-431c-b06c-ac36a497e1e5"). InnerVolumeSpecName "kube-api-access-m2jrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:12:05 crc kubenswrapper[4755]: I0224 10:12:05.036993 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4da244c-0e7a-431c-b06c-ac36a497e1e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4da244c-0e7a-431c-b06c-ac36a497e1e5" (UID: "f4da244c-0e7a-431c-b06c-ac36a497e1e5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:12:05 crc kubenswrapper[4755]: I0224 10:12:05.067990 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4da244c-0e7a-431c-b06c-ac36a497e1e5-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 10:12:05 crc kubenswrapper[4755]: I0224 10:12:05.068037 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4da244c-0e7a-431c-b06c-ac36a497e1e5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 10:12:05 crc kubenswrapper[4755]: I0224 10:12:05.068061 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2jrv\" (UniqueName: \"kubernetes.io/projected/f4da244c-0e7a-431c-b06c-ac36a497e1e5-kube-api-access-m2jrv\") on node \"crc\" DevicePath \"\"" Feb 24 10:12:05 crc kubenswrapper[4755]: I0224 10:12:05.726776 4755 generic.go:334] "Generic (PLEG): container finished" podID="f4da244c-0e7a-431c-b06c-ac36a497e1e5" 
containerID="d2c1e3da4124cc1b33d5f07b589388f7253f5d8af32beb5254595c56350933f3" exitCode=0 Feb 24 10:12:05 crc kubenswrapper[4755]: I0224 10:12:05.726831 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brcqt" event={"ID":"f4da244c-0e7a-431c-b06c-ac36a497e1e5","Type":"ContainerDied","Data":"d2c1e3da4124cc1b33d5f07b589388f7253f5d8af32beb5254595c56350933f3"} Feb 24 10:12:05 crc kubenswrapper[4755]: I0224 10:12:05.726864 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brcqt" Feb 24 10:12:05 crc kubenswrapper[4755]: I0224 10:12:05.726904 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brcqt" event={"ID":"f4da244c-0e7a-431c-b06c-ac36a497e1e5","Type":"ContainerDied","Data":"854054ca6f1d19089c006eb135e5863c8326a9ad35587f8aa2eebdc42c80c30f"} Feb 24 10:12:05 crc kubenswrapper[4755]: I0224 10:12:05.726950 4755 scope.go:117] "RemoveContainer" containerID="d2c1e3da4124cc1b33d5f07b589388f7253f5d8af32beb5254595c56350933f3" Feb 24 10:12:05 crc kubenswrapper[4755]: I0224 10:12:05.752507 4755 scope.go:117] "RemoveContainer" containerID="b5fe4327b5a1b1a632163d9191cd93d84f897b157bc2c2b84e41893be4138505" Feb 24 10:12:05 crc kubenswrapper[4755]: I0224 10:12:05.772981 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-brcqt"] Feb 24 10:12:05 crc kubenswrapper[4755]: I0224 10:12:05.779488 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-brcqt"] Feb 24 10:12:05 crc kubenswrapper[4755]: I0224 10:12:05.791417 4755 scope.go:117] "RemoveContainer" containerID="3da0f9165c35ccd4e51ce559b461aed7cc76bf8bd5249996f0ae79f357e53a59" Feb 24 10:12:05 crc kubenswrapper[4755]: I0224 10:12:05.814573 4755 scope.go:117] "RemoveContainer" containerID="d2c1e3da4124cc1b33d5f07b589388f7253f5d8af32beb5254595c56350933f3" Feb 24 
10:12:05 crc kubenswrapper[4755]: E0224 10:12:05.815035 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2c1e3da4124cc1b33d5f07b589388f7253f5d8af32beb5254595c56350933f3\": container with ID starting with d2c1e3da4124cc1b33d5f07b589388f7253f5d8af32beb5254595c56350933f3 not found: ID does not exist" containerID="d2c1e3da4124cc1b33d5f07b589388f7253f5d8af32beb5254595c56350933f3" Feb 24 10:12:05 crc kubenswrapper[4755]: I0224 10:12:05.815161 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2c1e3da4124cc1b33d5f07b589388f7253f5d8af32beb5254595c56350933f3"} err="failed to get container status \"d2c1e3da4124cc1b33d5f07b589388f7253f5d8af32beb5254595c56350933f3\": rpc error: code = NotFound desc = could not find container \"d2c1e3da4124cc1b33d5f07b589388f7253f5d8af32beb5254595c56350933f3\": container with ID starting with d2c1e3da4124cc1b33d5f07b589388f7253f5d8af32beb5254595c56350933f3 not found: ID does not exist" Feb 24 10:12:05 crc kubenswrapper[4755]: I0224 10:12:05.815193 4755 scope.go:117] "RemoveContainer" containerID="b5fe4327b5a1b1a632163d9191cd93d84f897b157bc2c2b84e41893be4138505" Feb 24 10:12:05 crc kubenswrapper[4755]: E0224 10:12:05.815707 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5fe4327b5a1b1a632163d9191cd93d84f897b157bc2c2b84e41893be4138505\": container with ID starting with b5fe4327b5a1b1a632163d9191cd93d84f897b157bc2c2b84e41893be4138505 not found: ID does not exist" containerID="b5fe4327b5a1b1a632163d9191cd93d84f897b157bc2c2b84e41893be4138505" Feb 24 10:12:05 crc kubenswrapper[4755]: I0224 10:12:05.815745 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5fe4327b5a1b1a632163d9191cd93d84f897b157bc2c2b84e41893be4138505"} err="failed to get container status 
\"b5fe4327b5a1b1a632163d9191cd93d84f897b157bc2c2b84e41893be4138505\": rpc error: code = NotFound desc = could not find container \"b5fe4327b5a1b1a632163d9191cd93d84f897b157bc2c2b84e41893be4138505\": container with ID starting with b5fe4327b5a1b1a632163d9191cd93d84f897b157bc2c2b84e41893be4138505 not found: ID does not exist" Feb 24 10:12:05 crc kubenswrapper[4755]: I0224 10:12:05.815772 4755 scope.go:117] "RemoveContainer" containerID="3da0f9165c35ccd4e51ce559b461aed7cc76bf8bd5249996f0ae79f357e53a59" Feb 24 10:12:05 crc kubenswrapper[4755]: E0224 10:12:05.816113 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3da0f9165c35ccd4e51ce559b461aed7cc76bf8bd5249996f0ae79f357e53a59\": container with ID starting with 3da0f9165c35ccd4e51ce559b461aed7cc76bf8bd5249996f0ae79f357e53a59 not found: ID does not exist" containerID="3da0f9165c35ccd4e51ce559b461aed7cc76bf8bd5249996f0ae79f357e53a59" Feb 24 10:12:05 crc kubenswrapper[4755]: I0224 10:12:05.816149 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3da0f9165c35ccd4e51ce559b461aed7cc76bf8bd5249996f0ae79f357e53a59"} err="failed to get container status \"3da0f9165c35ccd4e51ce559b461aed7cc76bf8bd5249996f0ae79f357e53a59\": rpc error: code = NotFound desc = could not find container \"3da0f9165c35ccd4e51ce559b461aed7cc76bf8bd5249996f0ae79f357e53a59\": container with ID starting with 3da0f9165c35ccd4e51ce559b461aed7cc76bf8bd5249996f0ae79f357e53a59 not found: ID does not exist" Feb 24 10:12:06 crc kubenswrapper[4755]: I0224 10:12:06.332214 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4da244c-0e7a-431c-b06c-ac36a497e1e5" path="/var/lib/kubelet/pods/f4da244c-0e7a-431c-b06c-ac36a497e1e5/volumes" Feb 24 10:12:06 crc kubenswrapper[4755]: I0224 10:12:06.740568 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" 
event={"ID":"62a8540c-7b42-4874-8a5a-ab648f039546","Type":"ContainerStarted","Data":"5e955dde233f9627153fc25ac207758e1d91e9677b2aa6f9cbf31d29177c6f1c"} Feb 24 10:12:07 crc kubenswrapper[4755]: I0224 10:12:07.805446 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-prsxg"] Feb 24 10:12:07 crc kubenswrapper[4755]: E0224 10:12:07.805785 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4da244c-0e7a-431c-b06c-ac36a497e1e5" containerName="extract-content" Feb 24 10:12:07 crc kubenswrapper[4755]: I0224 10:12:07.805810 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4da244c-0e7a-431c-b06c-ac36a497e1e5" containerName="extract-content" Feb 24 10:12:07 crc kubenswrapper[4755]: E0224 10:12:07.805839 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4da244c-0e7a-431c-b06c-ac36a497e1e5" containerName="registry-server" Feb 24 10:12:07 crc kubenswrapper[4755]: I0224 10:12:07.805851 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4da244c-0e7a-431c-b06c-ac36a497e1e5" containerName="registry-server" Feb 24 10:12:07 crc kubenswrapper[4755]: E0224 10:12:07.805869 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4da244c-0e7a-431c-b06c-ac36a497e1e5" containerName="extract-utilities" Feb 24 10:12:07 crc kubenswrapper[4755]: I0224 10:12:07.805882 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4da244c-0e7a-431c-b06c-ac36a497e1e5" containerName="extract-utilities" Feb 24 10:12:07 crc kubenswrapper[4755]: I0224 10:12:07.806143 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4da244c-0e7a-431c-b06c-ac36a497e1e5" containerName="registry-server" Feb 24 10:12:07 crc kubenswrapper[4755]: I0224 10:12:07.807486 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.008210 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48cqw\" (UniqueName: \"kubernetes.io/projected/864d4cf3-3c0d-4956-843d-5fa2f0fe057a-kube-api-access-48cqw\") pod \"certified-operators-prsxg\" (UID: \"864d4cf3-3c0d-4956-843d-5fa2f0fe057a\") " pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.008491 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864d4cf3-3c0d-4956-843d-5fa2f0fe057a-catalog-content\") pod \"certified-operators-prsxg\" (UID: \"864d4cf3-3c0d-4956-843d-5fa2f0fe057a\") " pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.008560 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864d4cf3-3c0d-4956-843d-5fa2f0fe057a-utilities\") pod \"certified-operators-prsxg\" (UID: \"864d4cf3-3c0d-4956-843d-5fa2f0fe057a\") " pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.110126 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864d4cf3-3c0d-4956-843d-5fa2f0fe057a-utilities\") pod \"certified-operators-prsxg\" (UID: \"864d4cf3-3c0d-4956-843d-5fa2f0fe057a\") " pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.110272 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48cqw\" (UniqueName: \"kubernetes.io/projected/864d4cf3-3c0d-4956-843d-5fa2f0fe057a-kube-api-access-48cqw\") pod 
\"certified-operators-prsxg\" (UID: \"864d4cf3-3c0d-4956-843d-5fa2f0fe057a\") " pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.110406 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864d4cf3-3c0d-4956-843d-5fa2f0fe057a-catalog-content\") pod \"certified-operators-prsxg\" (UID: \"864d4cf3-3c0d-4956-843d-5fa2f0fe057a\") " pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.110714 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864d4cf3-3c0d-4956-843d-5fa2f0fe057a-utilities\") pod \"certified-operators-prsxg\" (UID: \"864d4cf3-3c0d-4956-843d-5fa2f0fe057a\") " pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.111343 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864d4cf3-3c0d-4956-843d-5fa2f0fe057a-catalog-content\") pod \"certified-operators-prsxg\" (UID: \"864d4cf3-3c0d-4956-843d-5fa2f0fe057a\") " pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.134403 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48cqw\" (UniqueName: \"kubernetes.io/projected/864d4cf3-3c0d-4956-843d-5fa2f0fe057a-kube-api-access-48cqw\") pod \"certified-operators-prsxg\" (UID: \"864d4cf3-3c0d-4956-843d-5fa2f0fe057a\") " pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.224305 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-r82wb"] Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.224911 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-r82wb" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.226811 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.227287 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.227648 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.228039 4755 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-nm68l" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.413941 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8d85e886-5cc8-4859-9771-d155caf8682c-node-mnt\") pod \"crc-storage-crc-r82wb\" (UID: \"8d85e886-5cc8-4859-9771-d155caf8682c\") " pod="crc-storage/crc-storage-crc-r82wb" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.414132 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/8d85e886-5cc8-4859-9771-d155caf8682c-crc-storage\") pod \"crc-storage-crc-r82wb\" (UID: \"8d85e886-5cc8-4859-9771-d155caf8682c\") " pod="crc-storage/crc-storage-crc-r82wb" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.414176 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h6tg\" (UniqueName: \"kubernetes.io/projected/8d85e886-5cc8-4859-9771-d155caf8682c-kube-api-access-9h6tg\") pod \"crc-storage-crc-r82wb\" (UID: \"8d85e886-5cc8-4859-9771-d155caf8682c\") " pod="crc-storage/crc-storage-crc-r82wb" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.426755 4755 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:08 crc kubenswrapper[4755]: E0224 10:12:08.447924 4755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-prsxg_openshift-marketplace_864d4cf3-3c0d-4956-843d-5fa2f0fe057a_0(4f87994b2cdbe572c2f1dd15558b53c803e0cf34c383fecd0ad1e3cdfe882249): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 10:12:08 crc kubenswrapper[4755]: E0224 10:12:08.447993 4755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-prsxg_openshift-marketplace_864d4cf3-3c0d-4956-843d-5fa2f0fe057a_0(4f87994b2cdbe572c2f1dd15558b53c803e0cf34c383fecd0ad1e3cdfe882249): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:08 crc kubenswrapper[4755]: E0224 10:12:08.448015 4755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-prsxg_openshift-marketplace_864d4cf3-3c0d-4956-843d-5fa2f0fe057a_0(4f87994b2cdbe572c2f1dd15558b53c803e0cf34c383fecd0ad1e3cdfe882249): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:08 crc kubenswrapper[4755]: E0224 10:12:08.448074 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"certified-operators-prsxg_openshift-marketplace(864d4cf3-3c0d-4956-843d-5fa2f0fe057a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"certified-operators-prsxg_openshift-marketplace(864d4cf3-3c0d-4956-843d-5fa2f0fe057a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-prsxg_openshift-marketplace_864d4cf3-3c0d-4956-843d-5fa2f0fe057a_0(4f87994b2cdbe572c2f1dd15558b53c803e0cf34c383fecd0ad1e3cdfe882249): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/certified-operators-prsxg" podUID="864d4cf3-3c0d-4956-843d-5fa2f0fe057a" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.514895 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/8d85e886-5cc8-4859-9771-d155caf8682c-crc-storage\") pod \"crc-storage-crc-r82wb\" (UID: \"8d85e886-5cc8-4859-9771-d155caf8682c\") " pod="crc-storage/crc-storage-crc-r82wb" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.514942 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h6tg\" (UniqueName: \"kubernetes.io/projected/8d85e886-5cc8-4859-9771-d155caf8682c-kube-api-access-9h6tg\") pod \"crc-storage-crc-r82wb\" (UID: \"8d85e886-5cc8-4859-9771-d155caf8682c\") " pod="crc-storage/crc-storage-crc-r82wb" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.514994 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8d85e886-5cc8-4859-9771-d155caf8682c-node-mnt\") pod \"crc-storage-crc-r82wb\" (UID: \"8d85e886-5cc8-4859-9771-d155caf8682c\") " 
pod="crc-storage/crc-storage-crc-r82wb" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.515289 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8d85e886-5cc8-4859-9771-d155caf8682c-node-mnt\") pod \"crc-storage-crc-r82wb\" (UID: \"8d85e886-5cc8-4859-9771-d155caf8682c\") " pod="crc-storage/crc-storage-crc-r82wb" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.516168 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/8d85e886-5cc8-4859-9771-d155caf8682c-crc-storage\") pod \"crc-storage-crc-r82wb\" (UID: \"8d85e886-5cc8-4859-9771-d155caf8682c\") " pod="crc-storage/crc-storage-crc-r82wb" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.535600 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h6tg\" (UniqueName: \"kubernetes.io/projected/8d85e886-5cc8-4859-9771-d155caf8682c-kube-api-access-9h6tg\") pod \"crc-storage-crc-r82wb\" (UID: \"8d85e886-5cc8-4859-9771-d155caf8682c\") " pod="crc-storage/crc-storage-crc-r82wb" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.539088 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-r82wb" Feb 24 10:12:08 crc kubenswrapper[4755]: E0224 10:12:08.555383 4755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-r82wb_crc-storage_8d85e886-5cc8-4859-9771-d155caf8682c_0(2edb156557a7b9ba0bb2061cf59645dd4a286d40eb6dba66db853d70e2bfc5d5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 10:12:08 crc kubenswrapper[4755]: E0224 10:12:08.555438 4755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-r82wb_crc-storage_8d85e886-5cc8-4859-9771-d155caf8682c_0(2edb156557a7b9ba0bb2061cf59645dd4a286d40eb6dba66db853d70e2bfc5d5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-r82wb" Feb 24 10:12:08 crc kubenswrapper[4755]: E0224 10:12:08.555457 4755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-r82wb_crc-storage_8d85e886-5cc8-4859-9771-d155caf8682c_0(2edb156557a7b9ba0bb2061cf59645dd4a286d40eb6dba66db853d70e2bfc5d5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-r82wb" Feb 24 10:12:08 crc kubenswrapper[4755]: E0224 10:12:08.555497 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-r82wb_crc-storage(8d85e886-5cc8-4859-9771-d155caf8682c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-r82wb_crc-storage(8d85e886-5cc8-4859-9771-d155caf8682c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-r82wb_crc-storage_8d85e886-5cc8-4859-9771-d155caf8682c_0(2edb156557a7b9ba0bb2061cf59645dd4a286d40eb6dba66db853d70e2bfc5d5): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-r82wb" podUID="8d85e886-5cc8-4859-9771-d155caf8682c" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.758637 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" event={"ID":"62a8540c-7b42-4874-8a5a-ab648f039546","Type":"ContainerStarted","Data":"d1d6a61b6181a76d22145d90cc1a55b4f9ee48c979bf1d6550e9d8c4c31ccc2c"} Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.759044 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.759076 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.795019 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 24 10:12:08 crc kubenswrapper[4755]: I0224 10:12:08.803554 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" podStartSLOduration=7.803537508 podStartE2EDuration="7.803537508s" podCreationTimestamp="2026-02-24 10:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 10:12:08.802464114 +0000 UTC m=+1033.258986667" watchObservedRunningTime="2026-02-24 10:12:08.803537508 +0000 UTC m=+1033.260060051" Feb 24 10:12:09 crc kubenswrapper[4755]: I0224 10:12:09.283203 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-r82wb"] Feb 24 10:12:09 crc kubenswrapper[4755]: I0224 10:12:09.283298 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-r82wb" Feb 24 10:12:09 crc kubenswrapper[4755]: I0224 10:12:09.283691 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-r82wb" Feb 24 10:12:09 crc kubenswrapper[4755]: I0224 10:12:09.301446 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-prsxg"] Feb 24 10:12:09 crc kubenswrapper[4755]: I0224 10:12:09.301560 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:09 crc kubenswrapper[4755]: I0224 10:12:09.302008 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:09 crc kubenswrapper[4755]: E0224 10:12:09.308012 4755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-r82wb_crc-storage_8d85e886-5cc8-4859-9771-d155caf8682c_0(30c6b9d06e44ccd8482ff53952a22289043a5801eb6f904ae7051d9555880651): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 10:12:09 crc kubenswrapper[4755]: E0224 10:12:09.308096 4755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-r82wb_crc-storage_8d85e886-5cc8-4859-9771-d155caf8682c_0(30c6b9d06e44ccd8482ff53952a22289043a5801eb6f904ae7051d9555880651): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-r82wb" Feb 24 10:12:09 crc kubenswrapper[4755]: E0224 10:12:09.308127 4755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-r82wb_crc-storage_8d85e886-5cc8-4859-9771-d155caf8682c_0(30c6b9d06e44ccd8482ff53952a22289043a5801eb6f904ae7051d9555880651): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-r82wb" Feb 24 10:12:09 crc kubenswrapper[4755]: E0224 10:12:09.308180 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-r82wb_crc-storage(8d85e886-5cc8-4859-9771-d155caf8682c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-r82wb_crc-storage(8d85e886-5cc8-4859-9771-d155caf8682c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-r82wb_crc-storage_8d85e886-5cc8-4859-9771-d155caf8682c_0(30c6b9d06e44ccd8482ff53952a22289043a5801eb6f904ae7051d9555880651): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-r82wb" podUID="8d85e886-5cc8-4859-9771-d155caf8682c" Feb 24 10:12:09 crc kubenswrapper[4755]: E0224 10:12:09.332587 4755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-prsxg_openshift-marketplace_864d4cf3-3c0d-4956-843d-5fa2f0fe057a_0(4bf5dc3c58ff941277749275f77a92b15c42ab06c98c1575c4013c8dcff439c7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 10:12:09 crc kubenswrapper[4755]: E0224 10:12:09.332684 4755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-prsxg_openshift-marketplace_864d4cf3-3c0d-4956-843d-5fa2f0fe057a_0(4bf5dc3c58ff941277749275f77a92b15c42ab06c98c1575c4013c8dcff439c7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:09 crc kubenswrapper[4755]: E0224 10:12:09.332721 4755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-prsxg_openshift-marketplace_864d4cf3-3c0d-4956-843d-5fa2f0fe057a_0(4bf5dc3c58ff941277749275f77a92b15c42ab06c98c1575c4013c8dcff439c7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:09 crc kubenswrapper[4755]: E0224 10:12:09.332797 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"certified-operators-prsxg_openshift-marketplace(864d4cf3-3c0d-4956-843d-5fa2f0fe057a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"certified-operators-prsxg_openshift-marketplace(864d4cf3-3c0d-4956-843d-5fa2f0fe057a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-prsxg_openshift-marketplace_864d4cf3-3c0d-4956-843d-5fa2f0fe057a_0(4bf5dc3c58ff941277749275f77a92b15c42ab06c98c1575c4013c8dcff439c7): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/certified-operators-prsxg" podUID="864d4cf3-3c0d-4956-843d-5fa2f0fe057a" Feb 24 10:12:09 crc kubenswrapper[4755]: I0224 10:12:09.765963 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 24 10:12:09 crc kubenswrapper[4755]: I0224 10:12:09.817214 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 24 10:12:12 crc kubenswrapper[4755]: I0224 10:12:12.199676 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lcpzl"] Feb 24 10:12:12 crc kubenswrapper[4755]: I0224 10:12:12.202842 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcpzl" Feb 24 10:12:12 crc kubenswrapper[4755]: I0224 10:12:12.217942 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcpzl"] Feb 24 10:12:12 crc kubenswrapper[4755]: I0224 10:12:12.272317 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64b6724a-e48a-4af3-9ea2-e34d46561dd4-catalog-content\") pod \"redhat-marketplace-lcpzl\" (UID: \"64b6724a-e48a-4af3-9ea2-e34d46561dd4\") " pod="openshift-marketplace/redhat-marketplace-lcpzl" Feb 24 10:12:12 crc kubenswrapper[4755]: I0224 10:12:12.272411 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b6724a-e48a-4af3-9ea2-e34d46561dd4-utilities\") pod \"redhat-marketplace-lcpzl\" (UID: \"64b6724a-e48a-4af3-9ea2-e34d46561dd4\") " pod="openshift-marketplace/redhat-marketplace-lcpzl" Feb 24 10:12:12 crc kubenswrapper[4755]: I0224 10:12:12.272508 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-vcjks\" (UniqueName: \"kubernetes.io/projected/64b6724a-e48a-4af3-9ea2-e34d46561dd4-kube-api-access-vcjks\") pod \"redhat-marketplace-lcpzl\" (UID: \"64b6724a-e48a-4af3-9ea2-e34d46561dd4\") " pod="openshift-marketplace/redhat-marketplace-lcpzl" Feb 24 10:12:12 crc kubenswrapper[4755]: I0224 10:12:12.374012 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcjks\" (UniqueName: \"kubernetes.io/projected/64b6724a-e48a-4af3-9ea2-e34d46561dd4-kube-api-access-vcjks\") pod \"redhat-marketplace-lcpzl\" (UID: \"64b6724a-e48a-4af3-9ea2-e34d46561dd4\") " pod="openshift-marketplace/redhat-marketplace-lcpzl" Feb 24 10:12:12 crc kubenswrapper[4755]: I0224 10:12:12.374118 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64b6724a-e48a-4af3-9ea2-e34d46561dd4-catalog-content\") pod \"redhat-marketplace-lcpzl\" (UID: \"64b6724a-e48a-4af3-9ea2-e34d46561dd4\") " pod="openshift-marketplace/redhat-marketplace-lcpzl" Feb 24 10:12:12 crc kubenswrapper[4755]: I0224 10:12:12.374168 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b6724a-e48a-4af3-9ea2-e34d46561dd4-utilities\") pod \"redhat-marketplace-lcpzl\" (UID: \"64b6724a-e48a-4af3-9ea2-e34d46561dd4\") " pod="openshift-marketplace/redhat-marketplace-lcpzl" Feb 24 10:12:12 crc kubenswrapper[4755]: I0224 10:12:12.374660 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b6724a-e48a-4af3-9ea2-e34d46561dd4-utilities\") pod \"redhat-marketplace-lcpzl\" (UID: \"64b6724a-e48a-4af3-9ea2-e34d46561dd4\") " pod="openshift-marketplace/redhat-marketplace-lcpzl" Feb 24 10:12:12 crc kubenswrapper[4755]: I0224 10:12:12.375290 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/64b6724a-e48a-4af3-9ea2-e34d46561dd4-catalog-content\") pod \"redhat-marketplace-lcpzl\" (UID: \"64b6724a-e48a-4af3-9ea2-e34d46561dd4\") " pod="openshift-marketplace/redhat-marketplace-lcpzl" Feb 24 10:12:12 crc kubenswrapper[4755]: I0224 10:12:12.393764 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcjks\" (UniqueName: \"kubernetes.io/projected/64b6724a-e48a-4af3-9ea2-e34d46561dd4-kube-api-access-vcjks\") pod \"redhat-marketplace-lcpzl\" (UID: \"64b6724a-e48a-4af3-9ea2-e34d46561dd4\") " pod="openshift-marketplace/redhat-marketplace-lcpzl" Feb 24 10:12:12 crc kubenswrapper[4755]: I0224 10:12:12.536041 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcpzl" Feb 24 10:12:12 crc kubenswrapper[4755]: I0224 10:12:12.939458 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcpzl"] Feb 24 10:12:12 crc kubenswrapper[4755]: W0224 10:12:12.946449 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64b6724a_e48a_4af3_9ea2_e34d46561dd4.slice/crio-9a8038b1b6ed26ad85181e95a6dcc3b4fafd44c074c0ab2b5909c60962f0954c WatchSource:0}: Error finding container 9a8038b1b6ed26ad85181e95a6dcc3b4fafd44c074c0ab2b5909c60962f0954c: Status 404 returned error can't find the container with id 9a8038b1b6ed26ad85181e95a6dcc3b4fafd44c074c0ab2b5909c60962f0954c Feb 24 10:12:13 crc kubenswrapper[4755]: I0224 10:12:13.795363 4755 generic.go:334] "Generic (PLEG): container finished" podID="64b6724a-e48a-4af3-9ea2-e34d46561dd4" containerID="e087d680d1510c076946115f94505045046f2fa6f187c553fffcb352df633e7c" exitCode=0 Feb 24 10:12:13 crc kubenswrapper[4755]: I0224 10:12:13.796740 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcpzl" 
event={"ID":"64b6724a-e48a-4af3-9ea2-e34d46561dd4","Type":"ContainerDied","Data":"e087d680d1510c076946115f94505045046f2fa6f187c553fffcb352df633e7c"} Feb 24 10:12:13 crc kubenswrapper[4755]: I0224 10:12:13.798315 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcpzl" event={"ID":"64b6724a-e48a-4af3-9ea2-e34d46561dd4","Type":"ContainerStarted","Data":"9a8038b1b6ed26ad85181e95a6dcc3b4fafd44c074c0ab2b5909c60962f0954c"} Feb 24 10:12:14 crc kubenswrapper[4755]: I0224 10:12:14.003017 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xqkqm"] Feb 24 10:12:14 crc kubenswrapper[4755]: I0224 10:12:14.004844 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xqkqm" Feb 24 10:12:14 crc kubenswrapper[4755]: I0224 10:12:14.022375 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xqkqm"] Feb 24 10:12:14 crc kubenswrapper[4755]: I0224 10:12:14.095521 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/991ae998-9883-47da-bf65-d4439a482124-utilities\") pod \"redhat-operators-xqkqm\" (UID: \"991ae998-9883-47da-bf65-d4439a482124\") " pod="openshift-marketplace/redhat-operators-xqkqm" Feb 24 10:12:14 crc kubenswrapper[4755]: I0224 10:12:14.095597 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppvfn\" (UniqueName: \"kubernetes.io/projected/991ae998-9883-47da-bf65-d4439a482124-kube-api-access-ppvfn\") pod \"redhat-operators-xqkqm\" (UID: \"991ae998-9883-47da-bf65-d4439a482124\") " pod="openshift-marketplace/redhat-operators-xqkqm" Feb 24 10:12:14 crc kubenswrapper[4755]: I0224 10:12:14.095791 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/991ae998-9883-47da-bf65-d4439a482124-catalog-content\") pod \"redhat-operators-xqkqm\" (UID: \"991ae998-9883-47da-bf65-d4439a482124\") " pod="openshift-marketplace/redhat-operators-xqkqm" Feb 24 10:12:14 crc kubenswrapper[4755]: I0224 10:12:14.197292 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/991ae998-9883-47da-bf65-d4439a482124-utilities\") pod \"redhat-operators-xqkqm\" (UID: \"991ae998-9883-47da-bf65-d4439a482124\") " pod="openshift-marketplace/redhat-operators-xqkqm" Feb 24 10:12:14 crc kubenswrapper[4755]: I0224 10:12:14.197410 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppvfn\" (UniqueName: \"kubernetes.io/projected/991ae998-9883-47da-bf65-d4439a482124-kube-api-access-ppvfn\") pod \"redhat-operators-xqkqm\" (UID: \"991ae998-9883-47da-bf65-d4439a482124\") " pod="openshift-marketplace/redhat-operators-xqkqm" Feb 24 10:12:14 crc kubenswrapper[4755]: I0224 10:12:14.197485 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/991ae998-9883-47da-bf65-d4439a482124-catalog-content\") pod \"redhat-operators-xqkqm\" (UID: \"991ae998-9883-47da-bf65-d4439a482124\") " pod="openshift-marketplace/redhat-operators-xqkqm" Feb 24 10:12:14 crc kubenswrapper[4755]: I0224 10:12:14.197944 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/991ae998-9883-47da-bf65-d4439a482124-utilities\") pod \"redhat-operators-xqkqm\" (UID: \"991ae998-9883-47da-bf65-d4439a482124\") " pod="openshift-marketplace/redhat-operators-xqkqm" Feb 24 10:12:14 crc kubenswrapper[4755]: I0224 10:12:14.198092 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/991ae998-9883-47da-bf65-d4439a482124-catalog-content\") pod \"redhat-operators-xqkqm\" (UID: \"991ae998-9883-47da-bf65-d4439a482124\") " pod="openshift-marketplace/redhat-operators-xqkqm" Feb 24 10:12:14 crc kubenswrapper[4755]: I0224 10:12:14.223309 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppvfn\" (UniqueName: \"kubernetes.io/projected/991ae998-9883-47da-bf65-d4439a482124-kube-api-access-ppvfn\") pod \"redhat-operators-xqkqm\" (UID: \"991ae998-9883-47da-bf65-d4439a482124\") " pod="openshift-marketplace/redhat-operators-xqkqm" Feb 24 10:12:14 crc kubenswrapper[4755]: I0224 10:12:14.331566 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xqkqm" Feb 24 10:12:14 crc kubenswrapper[4755]: I0224 10:12:14.620255 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xqkqm"] Feb 24 10:12:14 crc kubenswrapper[4755]: I0224 10:12:14.800826 4755 generic.go:334] "Generic (PLEG): container finished" podID="64b6724a-e48a-4af3-9ea2-e34d46561dd4" containerID="bad6c1f79b4233462273366fefa5cfcbf4e1f225a8f2c61723fd21b8d1b816a0" exitCode=0 Feb 24 10:12:14 crc kubenswrapper[4755]: I0224 10:12:14.800894 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcpzl" event={"ID":"64b6724a-e48a-4af3-9ea2-e34d46561dd4","Type":"ContainerDied","Data":"bad6c1f79b4233462273366fefa5cfcbf4e1f225a8f2c61723fd21b8d1b816a0"} Feb 24 10:12:14 crc kubenswrapper[4755]: I0224 10:12:14.803217 4755 generic.go:334] "Generic (PLEG): container finished" podID="991ae998-9883-47da-bf65-d4439a482124" containerID="f2e851ffd67b2103261b055ad10f65bd972fdc5b99f348e688d2dae315c416c0" exitCode=0 Feb 24 10:12:14 crc kubenswrapper[4755]: I0224 10:12:14.803243 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqkqm" 
event={"ID":"991ae998-9883-47da-bf65-d4439a482124","Type":"ContainerDied","Data":"f2e851ffd67b2103261b055ad10f65bd972fdc5b99f348e688d2dae315c416c0"} Feb 24 10:12:14 crc kubenswrapper[4755]: I0224 10:12:14.803261 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqkqm" event={"ID":"991ae998-9883-47da-bf65-d4439a482124","Type":"ContainerStarted","Data":"919b8adc4ee75c30de409453755cc99236b13eea32fbc4120618ffcb085de9c5"} Feb 24 10:12:15 crc kubenswrapper[4755]: I0224 10:12:15.809932 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqkqm" event={"ID":"991ae998-9883-47da-bf65-d4439a482124","Type":"ContainerStarted","Data":"087c3d8bc79741a992af02c901cd7c46a6757a4b99d5ae1820edf6c89d280643"} Feb 24 10:12:15 crc kubenswrapper[4755]: I0224 10:12:15.812030 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcpzl" event={"ID":"64b6724a-e48a-4af3-9ea2-e34d46561dd4","Type":"ContainerStarted","Data":"f32552bbb99e3d43e6fd7d4b00ba00889b442ddbaf1edfb7a2ea3d9f77f95f72"} Feb 24 10:12:15 crc kubenswrapper[4755]: I0224 10:12:15.854375 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lcpzl" podStartSLOduration=2.267364787 podStartE2EDuration="3.854353559s" podCreationTimestamp="2026-02-24 10:12:12 +0000 UTC" firstStartedPulling="2026-02-24 10:12:13.799406455 +0000 UTC m=+1038.255929038" lastFinishedPulling="2026-02-24 10:12:15.386395277 +0000 UTC m=+1039.842917810" observedRunningTime="2026-02-24 10:12:15.849901639 +0000 UTC m=+1040.306424192" watchObservedRunningTime="2026-02-24 10:12:15.854353559 +0000 UTC m=+1040.310876112" Feb 24 10:12:16 crc kubenswrapper[4755]: I0224 10:12:16.822504 4755 generic.go:334] "Generic (PLEG): container finished" podID="991ae998-9883-47da-bf65-d4439a482124" containerID="087c3d8bc79741a992af02c901cd7c46a6757a4b99d5ae1820edf6c89d280643" 
exitCode=0 Feb 24 10:12:16 crc kubenswrapper[4755]: I0224 10:12:16.822597 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqkqm" event={"ID":"991ae998-9883-47da-bf65-d4439a482124","Type":"ContainerDied","Data":"087c3d8bc79741a992af02c901cd7c46a6757a4b99d5ae1820edf6c89d280643"} Feb 24 10:12:17 crc kubenswrapper[4755]: I0224 10:12:17.832354 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqkqm" event={"ID":"991ae998-9883-47da-bf65-d4439a482124","Type":"ContainerStarted","Data":"cd3d2334bccea4a5d7f5655a3e8eb96712ff78b460875a8b3048dab909a8b1ca"} Feb 24 10:12:17 crc kubenswrapper[4755]: I0224 10:12:17.862689 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xqkqm" podStartSLOduration=2.442119364 podStartE2EDuration="4.862667972s" podCreationTimestamp="2026-02-24 10:12:13 +0000 UTC" firstStartedPulling="2026-02-24 10:12:14.804274949 +0000 UTC m=+1039.260797492" lastFinishedPulling="2026-02-24 10:12:17.224823547 +0000 UTC m=+1041.681346100" observedRunningTime="2026-02-24 10:12:17.858408299 +0000 UTC m=+1042.314930882" watchObservedRunningTime="2026-02-24 10:12:17.862667972 +0000 UTC m=+1042.319190555" Feb 24 10:12:21 crc kubenswrapper[4755]: I0224 10:12:21.694448 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:12:21 crc kubenswrapper[4755]: I0224 10:12:21.694848 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Feb 24 10:12:22 crc kubenswrapper[4755]: I0224 10:12:22.315907 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-r82wb" Feb 24 10:12:22 crc kubenswrapper[4755]: I0224 10:12:22.316415 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-r82wb" Feb 24 10:12:22 crc kubenswrapper[4755]: I0224 10:12:22.539293 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lcpzl" Feb 24 10:12:22 crc kubenswrapper[4755]: I0224 10:12:22.539586 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lcpzl" Feb 24 10:12:22 crc kubenswrapper[4755]: I0224 10:12:22.601934 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lcpzl" Feb 24 10:12:22 crc kubenswrapper[4755]: I0224 10:12:22.607651 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-r82wb"] Feb 24 10:12:22 crc kubenswrapper[4755]: I0224 10:12:22.867434 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-r82wb" event={"ID":"8d85e886-5cc8-4859-9771-d155caf8682c","Type":"ContainerStarted","Data":"b3cf9d845720a912e2c667821b5b2df34583bedfb727fd82454a5ec159059238"} Feb 24 10:12:22 crc kubenswrapper[4755]: I0224 10:12:22.942422 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lcpzl" Feb 24 10:12:23 crc kubenswrapper[4755]: I0224 10:12:23.846462 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcpzl"] Feb 24 10:12:24 crc kubenswrapper[4755]: I0224 10:12:24.316274 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:24 crc kubenswrapper[4755]: I0224 10:12:24.316985 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:24 crc kubenswrapper[4755]: I0224 10:12:24.332524 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xqkqm" Feb 24 10:12:24 crc kubenswrapper[4755]: I0224 10:12:24.332592 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xqkqm" Feb 24 10:12:24 crc kubenswrapper[4755]: I0224 10:12:24.847570 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-prsxg"] Feb 24 10:12:24 crc kubenswrapper[4755]: W0224 10:12:24.855012 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod864d4cf3_3c0d_4956_843d_5fa2f0fe057a.slice/crio-6d311a774692c095a2cb04c66e8e4b1a30fb08e77e456443e746ce39c14e41c4 WatchSource:0}: Error finding container 6d311a774692c095a2cb04c66e8e4b1a30fb08e77e456443e746ce39c14e41c4: Status 404 returned error can't find the container with id 6d311a774692c095a2cb04c66e8e4b1a30fb08e77e456443e746ce39c14e41c4 Feb 24 10:12:24 crc kubenswrapper[4755]: I0224 10:12:24.892795 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prsxg" event={"ID":"864d4cf3-3c0d-4956-843d-5fa2f0fe057a","Type":"ContainerStarted","Data":"6d311a774692c095a2cb04c66e8e4b1a30fb08e77e456443e746ce39c14e41c4"} Feb 24 10:12:24 crc kubenswrapper[4755]: I0224 10:12:24.895695 4755 generic.go:334] "Generic (PLEG): container finished" podID="8d85e886-5cc8-4859-9771-d155caf8682c" containerID="b0ae6317a8b2381752c314c7364eb067d767666f06fd8476d63c437c16c13e8c" exitCode=0 Feb 24 10:12:24 crc kubenswrapper[4755]: I0224 10:12:24.896028 
4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lcpzl" podUID="64b6724a-e48a-4af3-9ea2-e34d46561dd4" containerName="registry-server" containerID="cri-o://f32552bbb99e3d43e6fd7d4b00ba00889b442ddbaf1edfb7a2ea3d9f77f95f72" gracePeriod=2 Feb 24 10:12:24 crc kubenswrapper[4755]: I0224 10:12:24.896563 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-r82wb" event={"ID":"8d85e886-5cc8-4859-9771-d155caf8682c","Type":"ContainerDied","Data":"b0ae6317a8b2381752c314c7364eb067d767666f06fd8476d63c437c16c13e8c"} Feb 24 10:12:25 crc kubenswrapper[4755]: I0224 10:12:25.314476 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcpzl" Feb 24 10:12:25 crc kubenswrapper[4755]: I0224 10:12:25.394462 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xqkqm" podUID="991ae998-9883-47da-bf65-d4439a482124" containerName="registry-server" probeResult="failure" output=< Feb 24 10:12:25 crc kubenswrapper[4755]: timeout: failed to connect service ":50051" within 1s Feb 24 10:12:25 crc kubenswrapper[4755]: > Feb 24 10:12:25 crc kubenswrapper[4755]: I0224 10:12:25.451844 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b6724a-e48a-4af3-9ea2-e34d46561dd4-utilities\") pod \"64b6724a-e48a-4af3-9ea2-e34d46561dd4\" (UID: \"64b6724a-e48a-4af3-9ea2-e34d46561dd4\") " Feb 24 10:12:25 crc kubenswrapper[4755]: I0224 10:12:25.451902 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64b6724a-e48a-4af3-9ea2-e34d46561dd4-catalog-content\") pod \"64b6724a-e48a-4af3-9ea2-e34d46561dd4\" (UID: \"64b6724a-e48a-4af3-9ea2-e34d46561dd4\") " Feb 24 10:12:25 crc kubenswrapper[4755]: I0224 10:12:25.451958 4755 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcjks\" (UniqueName: \"kubernetes.io/projected/64b6724a-e48a-4af3-9ea2-e34d46561dd4-kube-api-access-vcjks\") pod \"64b6724a-e48a-4af3-9ea2-e34d46561dd4\" (UID: \"64b6724a-e48a-4af3-9ea2-e34d46561dd4\") " Feb 24 10:12:25 crc kubenswrapper[4755]: I0224 10:12:25.454145 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64b6724a-e48a-4af3-9ea2-e34d46561dd4-utilities" (OuterVolumeSpecName: "utilities") pod "64b6724a-e48a-4af3-9ea2-e34d46561dd4" (UID: "64b6724a-e48a-4af3-9ea2-e34d46561dd4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:12:25 crc kubenswrapper[4755]: I0224 10:12:25.458270 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64b6724a-e48a-4af3-9ea2-e34d46561dd4-kube-api-access-vcjks" (OuterVolumeSpecName: "kube-api-access-vcjks") pod "64b6724a-e48a-4af3-9ea2-e34d46561dd4" (UID: "64b6724a-e48a-4af3-9ea2-e34d46561dd4"). InnerVolumeSpecName "kube-api-access-vcjks". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:12:25 crc kubenswrapper[4755]: I0224 10:12:25.489525 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64b6724a-e48a-4af3-9ea2-e34d46561dd4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64b6724a-e48a-4af3-9ea2-e34d46561dd4" (UID: "64b6724a-e48a-4af3-9ea2-e34d46561dd4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:12:25 crc kubenswrapper[4755]: I0224 10:12:25.553661 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b6724a-e48a-4af3-9ea2-e34d46561dd4-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 10:12:25 crc kubenswrapper[4755]: I0224 10:12:25.553762 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64b6724a-e48a-4af3-9ea2-e34d46561dd4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 10:12:25 crc kubenswrapper[4755]: I0224 10:12:25.554170 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcjks\" (UniqueName: \"kubernetes.io/projected/64b6724a-e48a-4af3-9ea2-e34d46561dd4-kube-api-access-vcjks\") on node \"crc\" DevicePath \"\"" Feb 24 10:12:25 crc kubenswrapper[4755]: I0224 10:12:25.906127 4755 generic.go:334] "Generic (PLEG): container finished" podID="64b6724a-e48a-4af3-9ea2-e34d46561dd4" containerID="f32552bbb99e3d43e6fd7d4b00ba00889b442ddbaf1edfb7a2ea3d9f77f95f72" exitCode=0 Feb 24 10:12:25 crc kubenswrapper[4755]: I0224 10:12:25.906221 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcpzl" Feb 24 10:12:25 crc kubenswrapper[4755]: I0224 10:12:25.906258 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcpzl" event={"ID":"64b6724a-e48a-4af3-9ea2-e34d46561dd4","Type":"ContainerDied","Data":"f32552bbb99e3d43e6fd7d4b00ba00889b442ddbaf1edfb7a2ea3d9f77f95f72"} Feb 24 10:12:25 crc kubenswrapper[4755]: I0224 10:12:25.906339 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcpzl" event={"ID":"64b6724a-e48a-4af3-9ea2-e34d46561dd4","Type":"ContainerDied","Data":"9a8038b1b6ed26ad85181e95a6dcc3b4fafd44c074c0ab2b5909c60962f0954c"} Feb 24 10:12:25 crc kubenswrapper[4755]: I0224 10:12:25.906370 4755 scope.go:117] "RemoveContainer" containerID="f32552bbb99e3d43e6fd7d4b00ba00889b442ddbaf1edfb7a2ea3d9f77f95f72" Feb 24 10:12:25 crc kubenswrapper[4755]: I0224 10:12:25.908445 4755 generic.go:334] "Generic (PLEG): container finished" podID="864d4cf3-3c0d-4956-843d-5fa2f0fe057a" containerID="2bf80d0c75f8db63a365f73f9e15288e17a08ee1d3a5243c80d0207b6cea77dd" exitCode=0 Feb 24 10:12:25 crc kubenswrapper[4755]: I0224 10:12:25.908616 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prsxg" event={"ID":"864d4cf3-3c0d-4956-843d-5fa2f0fe057a","Type":"ContainerDied","Data":"2bf80d0c75f8db63a365f73f9e15288e17a08ee1d3a5243c80d0207b6cea77dd"} Feb 24 10:12:25 crc kubenswrapper[4755]: I0224 10:12:25.943796 4755 scope.go:117] "RemoveContainer" containerID="bad6c1f79b4233462273366fefa5cfcbf4e1f225a8f2c61723fd21b8d1b816a0" Feb 24 10:12:25 crc kubenswrapper[4755]: I0224 10:12:25.994747 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcpzl"] Feb 24 10:12:25 crc kubenswrapper[4755]: I0224 10:12:25.996213 4755 scope.go:117] "RemoveContainer" 
containerID="e087d680d1510c076946115f94505045046f2fa6f187c553fffcb352df633e7c" Feb 24 10:12:26 crc kubenswrapper[4755]: I0224 10:12:26.000963 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcpzl"] Feb 24 10:12:26 crc kubenswrapper[4755]: I0224 10:12:26.031855 4755 scope.go:117] "RemoveContainer" containerID="f32552bbb99e3d43e6fd7d4b00ba00889b442ddbaf1edfb7a2ea3d9f77f95f72" Feb 24 10:12:26 crc kubenswrapper[4755]: E0224 10:12:26.032387 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f32552bbb99e3d43e6fd7d4b00ba00889b442ddbaf1edfb7a2ea3d9f77f95f72\": container with ID starting with f32552bbb99e3d43e6fd7d4b00ba00889b442ddbaf1edfb7a2ea3d9f77f95f72 not found: ID does not exist" containerID="f32552bbb99e3d43e6fd7d4b00ba00889b442ddbaf1edfb7a2ea3d9f77f95f72" Feb 24 10:12:26 crc kubenswrapper[4755]: I0224 10:12:26.032431 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f32552bbb99e3d43e6fd7d4b00ba00889b442ddbaf1edfb7a2ea3d9f77f95f72"} err="failed to get container status \"f32552bbb99e3d43e6fd7d4b00ba00889b442ddbaf1edfb7a2ea3d9f77f95f72\": rpc error: code = NotFound desc = could not find container \"f32552bbb99e3d43e6fd7d4b00ba00889b442ddbaf1edfb7a2ea3d9f77f95f72\": container with ID starting with f32552bbb99e3d43e6fd7d4b00ba00889b442ddbaf1edfb7a2ea3d9f77f95f72 not found: ID does not exist" Feb 24 10:12:26 crc kubenswrapper[4755]: I0224 10:12:26.032465 4755 scope.go:117] "RemoveContainer" containerID="bad6c1f79b4233462273366fefa5cfcbf4e1f225a8f2c61723fd21b8d1b816a0" Feb 24 10:12:26 crc kubenswrapper[4755]: E0224 10:12:26.033089 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bad6c1f79b4233462273366fefa5cfcbf4e1f225a8f2c61723fd21b8d1b816a0\": container with ID starting with 
bad6c1f79b4233462273366fefa5cfcbf4e1f225a8f2c61723fd21b8d1b816a0 not found: ID does not exist" containerID="bad6c1f79b4233462273366fefa5cfcbf4e1f225a8f2c61723fd21b8d1b816a0" Feb 24 10:12:26 crc kubenswrapper[4755]: I0224 10:12:26.033123 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bad6c1f79b4233462273366fefa5cfcbf4e1f225a8f2c61723fd21b8d1b816a0"} err="failed to get container status \"bad6c1f79b4233462273366fefa5cfcbf4e1f225a8f2c61723fd21b8d1b816a0\": rpc error: code = NotFound desc = could not find container \"bad6c1f79b4233462273366fefa5cfcbf4e1f225a8f2c61723fd21b8d1b816a0\": container with ID starting with bad6c1f79b4233462273366fefa5cfcbf4e1f225a8f2c61723fd21b8d1b816a0 not found: ID does not exist" Feb 24 10:12:26 crc kubenswrapper[4755]: I0224 10:12:26.033150 4755 scope.go:117] "RemoveContainer" containerID="e087d680d1510c076946115f94505045046f2fa6f187c553fffcb352df633e7c" Feb 24 10:12:26 crc kubenswrapper[4755]: E0224 10:12:26.033554 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e087d680d1510c076946115f94505045046f2fa6f187c553fffcb352df633e7c\": container with ID starting with e087d680d1510c076946115f94505045046f2fa6f187c553fffcb352df633e7c not found: ID does not exist" containerID="e087d680d1510c076946115f94505045046f2fa6f187c553fffcb352df633e7c" Feb 24 10:12:26 crc kubenswrapper[4755]: I0224 10:12:26.033582 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e087d680d1510c076946115f94505045046f2fa6f187c553fffcb352df633e7c"} err="failed to get container status \"e087d680d1510c076946115f94505045046f2fa6f187c553fffcb352df633e7c\": rpc error: code = NotFound desc = could not find container \"e087d680d1510c076946115f94505045046f2fa6f187c553fffcb352df633e7c\": container with ID starting with e087d680d1510c076946115f94505045046f2fa6f187c553fffcb352df633e7c not found: ID does not 
exist" Feb 24 10:12:26 crc kubenswrapper[4755]: I0224 10:12:26.257820 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-r82wb" Feb 24 10:12:26 crc kubenswrapper[4755]: I0224 10:12:26.263041 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h6tg\" (UniqueName: \"kubernetes.io/projected/8d85e886-5cc8-4859-9771-d155caf8682c-kube-api-access-9h6tg\") pod \"8d85e886-5cc8-4859-9771-d155caf8682c\" (UID: \"8d85e886-5cc8-4859-9771-d155caf8682c\") " Feb 24 10:12:26 crc kubenswrapper[4755]: I0224 10:12:26.263140 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8d85e886-5cc8-4859-9771-d155caf8682c-node-mnt\") pod \"8d85e886-5cc8-4859-9771-d155caf8682c\" (UID: \"8d85e886-5cc8-4859-9771-d155caf8682c\") " Feb 24 10:12:26 crc kubenswrapper[4755]: I0224 10:12:26.263243 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/8d85e886-5cc8-4859-9771-d155caf8682c-crc-storage\") pod \"8d85e886-5cc8-4859-9771-d155caf8682c\" (UID: \"8d85e886-5cc8-4859-9771-d155caf8682c\") " Feb 24 10:12:26 crc kubenswrapper[4755]: I0224 10:12:26.263369 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d85e886-5cc8-4859-9771-d155caf8682c-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "8d85e886-5cc8-4859-9771-d155caf8682c" (UID: "8d85e886-5cc8-4859-9771-d155caf8682c"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 10:12:26 crc kubenswrapper[4755]: I0224 10:12:26.263509 4755 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8d85e886-5cc8-4859-9771-d155caf8682c-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 24 10:12:26 crc kubenswrapper[4755]: I0224 10:12:26.268215 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d85e886-5cc8-4859-9771-d155caf8682c-kube-api-access-9h6tg" (OuterVolumeSpecName: "kube-api-access-9h6tg") pod "8d85e886-5cc8-4859-9771-d155caf8682c" (UID: "8d85e886-5cc8-4859-9771-d155caf8682c"). InnerVolumeSpecName "kube-api-access-9h6tg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:12:26 crc kubenswrapper[4755]: I0224 10:12:26.294539 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d85e886-5cc8-4859-9771-d155caf8682c-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "8d85e886-5cc8-4859-9771-d155caf8682c" (UID: "8d85e886-5cc8-4859-9771-d155caf8682c"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:12:26 crc kubenswrapper[4755]: I0224 10:12:26.323951 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64b6724a-e48a-4af3-9ea2-e34d46561dd4" path="/var/lib/kubelet/pods/64b6724a-e48a-4af3-9ea2-e34d46561dd4/volumes" Feb 24 10:12:26 crc kubenswrapper[4755]: I0224 10:12:26.364087 4755 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/8d85e886-5cc8-4859-9771-d155caf8682c-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 24 10:12:26 crc kubenswrapper[4755]: I0224 10:12:26.364132 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9h6tg\" (UniqueName: \"kubernetes.io/projected/8d85e886-5cc8-4859-9771-d155caf8682c-kube-api-access-9h6tg\") on node \"crc\" DevicePath \"\"" Feb 24 10:12:26 crc kubenswrapper[4755]: I0224 10:12:26.915804 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-r82wb" Feb 24 10:12:26 crc kubenswrapper[4755]: I0224 10:12:26.915805 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-r82wb" event={"ID":"8d85e886-5cc8-4859-9771-d155caf8682c","Type":"ContainerDied","Data":"b3cf9d845720a912e2c667821b5b2df34583bedfb727fd82454a5ec159059238"} Feb 24 10:12:26 crc kubenswrapper[4755]: I0224 10:12:26.915958 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cf9d845720a912e2c667821b5b2df34583bedfb727fd82454a5ec159059238" Feb 24 10:12:26 crc kubenswrapper[4755]: I0224 10:12:26.922209 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prsxg" event={"ID":"864d4cf3-3c0d-4956-843d-5fa2f0fe057a","Type":"ContainerStarted","Data":"314a578f42393fc2e3e05698990115e60e3bb3c4913541f1ad246b2e5b2b7a62"} Feb 24 10:12:27 crc kubenswrapper[4755]: I0224 10:12:27.936405 4755 generic.go:334] "Generic (PLEG): container 
finished" podID="864d4cf3-3c0d-4956-843d-5fa2f0fe057a" containerID="314a578f42393fc2e3e05698990115e60e3bb3c4913541f1ad246b2e5b2b7a62" exitCode=0 Feb 24 10:12:27 crc kubenswrapper[4755]: I0224 10:12:27.936611 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prsxg" event={"ID":"864d4cf3-3c0d-4956-843d-5fa2f0fe057a","Type":"ContainerDied","Data":"314a578f42393fc2e3e05698990115e60e3bb3c4913541f1ad246b2e5b2b7a62"} Feb 24 10:12:28 crc kubenswrapper[4755]: I0224 10:12:28.943815 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prsxg" event={"ID":"864d4cf3-3c0d-4956-843d-5fa2f0fe057a","Type":"ContainerStarted","Data":"48f79a137f8d0acc84398c83457861a2b760002943156d76958a423db76cff3c"} Feb 24 10:12:28 crc kubenswrapper[4755]: I0224 10:12:28.969544 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-prsxg" podStartSLOduration=19.539947731 podStartE2EDuration="21.969513744s" podCreationTimestamp="2026-02-24 10:12:07 +0000 UTC" firstStartedPulling="2026-02-24 10:12:25.910922023 +0000 UTC m=+1050.367444606" lastFinishedPulling="2026-02-24 10:12:28.340488036 +0000 UTC m=+1052.797010619" observedRunningTime="2026-02-24 10:12:28.968158891 +0000 UTC m=+1053.424681484" watchObservedRunningTime="2026-02-24 10:12:28.969513744 +0000 UTC m=+1053.426036327" Feb 24 10:12:32 crc kubenswrapper[4755]: I0224 10:12:32.201341 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dj8zh" Feb 24 10:12:34 crc kubenswrapper[4755]: I0224 10:12:34.387500 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xqkqm" Feb 24 10:12:34 crc kubenswrapper[4755]: I0224 10:12:34.423593 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xqkqm" Feb 24 
10:12:35 crc kubenswrapper[4755]: I0224 10:12:35.437392 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c"] Feb 24 10:12:35 crc kubenswrapper[4755]: E0224 10:12:35.437767 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d85e886-5cc8-4859-9771-d155caf8682c" containerName="storage" Feb 24 10:12:35 crc kubenswrapper[4755]: I0224 10:12:35.437791 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d85e886-5cc8-4859-9771-d155caf8682c" containerName="storage" Feb 24 10:12:35 crc kubenswrapper[4755]: E0224 10:12:35.437818 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b6724a-e48a-4af3-9ea2-e34d46561dd4" containerName="registry-server" Feb 24 10:12:35 crc kubenswrapper[4755]: I0224 10:12:35.437831 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b6724a-e48a-4af3-9ea2-e34d46561dd4" containerName="registry-server" Feb 24 10:12:35 crc kubenswrapper[4755]: E0224 10:12:35.437855 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b6724a-e48a-4af3-9ea2-e34d46561dd4" containerName="extract-utilities" Feb 24 10:12:35 crc kubenswrapper[4755]: I0224 10:12:35.437869 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b6724a-e48a-4af3-9ea2-e34d46561dd4" containerName="extract-utilities" Feb 24 10:12:35 crc kubenswrapper[4755]: E0224 10:12:35.437892 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b6724a-e48a-4af3-9ea2-e34d46561dd4" containerName="extract-content" Feb 24 10:12:35 crc kubenswrapper[4755]: I0224 10:12:35.437907 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b6724a-e48a-4af3-9ea2-e34d46561dd4" containerName="extract-content" Feb 24 10:12:35 crc kubenswrapper[4755]: I0224 10:12:35.438126 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d85e886-5cc8-4859-9771-d155caf8682c" containerName="storage" Feb 24 10:12:35 crc 
kubenswrapper[4755]: I0224 10:12:35.438145 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="64b6724a-e48a-4af3-9ea2-e34d46561dd4" containerName="registry-server" Feb 24 10:12:35 crc kubenswrapper[4755]: I0224 10:12:35.439573 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c" Feb 24 10:12:35 crc kubenswrapper[4755]: I0224 10:12:35.441894 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 24 10:12:35 crc kubenswrapper[4755]: I0224 10:12:35.457760 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c"] Feb 24 10:12:35 crc kubenswrapper[4755]: I0224 10:12:35.487805 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5af41058-f556-416b-ae03-ad3e98c6c49a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c\" (UID: \"5af41058-f556-416b-ae03-ad3e98c6c49a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c" Feb 24 10:12:35 crc kubenswrapper[4755]: I0224 10:12:35.487851 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcmh7\" (UniqueName: \"kubernetes.io/projected/5af41058-f556-416b-ae03-ad3e98c6c49a-kube-api-access-zcmh7\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c\" (UID: \"5af41058-f556-416b-ae03-ad3e98c6c49a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c" Feb 24 10:12:35 crc kubenswrapper[4755]: I0224 10:12:35.487919 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/5af41058-f556-416b-ae03-ad3e98c6c49a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c\" (UID: \"5af41058-f556-416b-ae03-ad3e98c6c49a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c" Feb 24 10:12:35 crc kubenswrapper[4755]: I0224 10:12:35.552503 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xqkqm"] Feb 24 10:12:35 crc kubenswrapper[4755]: I0224 10:12:35.588998 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5af41058-f556-416b-ae03-ad3e98c6c49a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c\" (UID: \"5af41058-f556-416b-ae03-ad3e98c6c49a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c" Feb 24 10:12:35 crc kubenswrapper[4755]: I0224 10:12:35.589038 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcmh7\" (UniqueName: \"kubernetes.io/projected/5af41058-f556-416b-ae03-ad3e98c6c49a-kube-api-access-zcmh7\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c\" (UID: \"5af41058-f556-416b-ae03-ad3e98c6c49a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c" Feb 24 10:12:35 crc kubenswrapper[4755]: I0224 10:12:35.589100 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5af41058-f556-416b-ae03-ad3e98c6c49a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c\" (UID: \"5af41058-f556-416b-ae03-ad3e98c6c49a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c" Feb 24 10:12:35 crc kubenswrapper[4755]: I0224 10:12:35.589547 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/5af41058-f556-416b-ae03-ad3e98c6c49a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c\" (UID: \"5af41058-f556-416b-ae03-ad3e98c6c49a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c" Feb 24 10:12:35 crc kubenswrapper[4755]: I0224 10:12:35.589629 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5af41058-f556-416b-ae03-ad3e98c6c49a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c\" (UID: \"5af41058-f556-416b-ae03-ad3e98c6c49a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c" Feb 24 10:12:35 crc kubenswrapper[4755]: I0224 10:12:35.611215 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcmh7\" (UniqueName: \"kubernetes.io/projected/5af41058-f556-416b-ae03-ad3e98c6c49a-kube-api-access-zcmh7\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c\" (UID: \"5af41058-f556-416b-ae03-ad3e98c6c49a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c" Feb 24 10:12:35 crc kubenswrapper[4755]: I0224 10:12:35.757515 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c" Feb 24 10:12:35 crc kubenswrapper[4755]: I0224 10:12:35.998034 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xqkqm" podUID="991ae998-9883-47da-bf65-d4439a482124" containerName="registry-server" containerID="cri-o://cd3d2334bccea4a5d7f5655a3e8eb96712ff78b460875a8b3048dab909a8b1ca" gracePeriod=2 Feb 24 10:12:36 crc kubenswrapper[4755]: I0224 10:12:36.047687 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c"] Feb 24 10:12:36 crc kubenswrapper[4755]: I0224 10:12:36.334215 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xqkqm" Feb 24 10:12:36 crc kubenswrapper[4755]: I0224 10:12:36.399176 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/991ae998-9883-47da-bf65-d4439a482124-utilities\") pod \"991ae998-9883-47da-bf65-d4439a482124\" (UID: \"991ae998-9883-47da-bf65-d4439a482124\") " Feb 24 10:12:36 crc kubenswrapper[4755]: I0224 10:12:36.399242 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppvfn\" (UniqueName: \"kubernetes.io/projected/991ae998-9883-47da-bf65-d4439a482124-kube-api-access-ppvfn\") pod \"991ae998-9883-47da-bf65-d4439a482124\" (UID: \"991ae998-9883-47da-bf65-d4439a482124\") " Feb 24 10:12:36 crc kubenswrapper[4755]: I0224 10:12:36.399288 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/991ae998-9883-47da-bf65-d4439a482124-catalog-content\") pod \"991ae998-9883-47da-bf65-d4439a482124\" (UID: \"991ae998-9883-47da-bf65-d4439a482124\") " Feb 24 10:12:36 crc 
kubenswrapper[4755]: I0224 10:12:36.400705 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/991ae998-9883-47da-bf65-d4439a482124-utilities" (OuterVolumeSpecName: "utilities") pod "991ae998-9883-47da-bf65-d4439a482124" (UID: "991ae998-9883-47da-bf65-d4439a482124"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:12:36 crc kubenswrapper[4755]: I0224 10:12:36.405727 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/991ae998-9883-47da-bf65-d4439a482124-kube-api-access-ppvfn" (OuterVolumeSpecName: "kube-api-access-ppvfn") pod "991ae998-9883-47da-bf65-d4439a482124" (UID: "991ae998-9883-47da-bf65-d4439a482124"). InnerVolumeSpecName "kube-api-access-ppvfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:12:36 crc kubenswrapper[4755]: I0224 10:12:36.500514 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/991ae998-9883-47da-bf65-d4439a482124-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 10:12:36 crc kubenswrapper[4755]: I0224 10:12:36.500554 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppvfn\" (UniqueName: \"kubernetes.io/projected/991ae998-9883-47da-bf65-d4439a482124-kube-api-access-ppvfn\") on node \"crc\" DevicePath \"\"" Feb 24 10:12:36 crc kubenswrapper[4755]: I0224 10:12:36.584292 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/991ae998-9883-47da-bf65-d4439a482124-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "991ae998-9883-47da-bf65-d4439a482124" (UID: "991ae998-9883-47da-bf65-d4439a482124"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:12:36 crc kubenswrapper[4755]: I0224 10:12:36.601650 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/991ae998-9883-47da-bf65-d4439a482124-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 10:12:37 crc kubenswrapper[4755]: I0224 10:12:37.007626 4755 generic.go:334] "Generic (PLEG): container finished" podID="5af41058-f556-416b-ae03-ad3e98c6c49a" containerID="c63293c08a0b709233d4322919e1051ae4b44e15aee8a3fbb433ab937cf126b9" exitCode=0 Feb 24 10:12:37 crc kubenswrapper[4755]: I0224 10:12:37.007775 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c" event={"ID":"5af41058-f556-416b-ae03-ad3e98c6c49a","Type":"ContainerDied","Data":"c63293c08a0b709233d4322919e1051ae4b44e15aee8a3fbb433ab937cf126b9"} Feb 24 10:12:37 crc kubenswrapper[4755]: I0224 10:12:37.007843 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c" event={"ID":"5af41058-f556-416b-ae03-ad3e98c6c49a","Type":"ContainerStarted","Data":"d8a0e1c13c22a084b4ea800a9eedc1ca6c11536c2b851092d79ec5902f279d6b"} Feb 24 10:12:37 crc kubenswrapper[4755]: I0224 10:12:37.012809 4755 generic.go:334] "Generic (PLEG): container finished" podID="991ae998-9883-47da-bf65-d4439a482124" containerID="cd3d2334bccea4a5d7f5655a3e8eb96712ff78b460875a8b3048dab909a8b1ca" exitCode=0 Feb 24 10:12:37 crc kubenswrapper[4755]: I0224 10:12:37.012972 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqkqm" event={"ID":"991ae998-9883-47da-bf65-d4439a482124","Type":"ContainerDied","Data":"cd3d2334bccea4a5d7f5655a3e8eb96712ff78b460875a8b3048dab909a8b1ca"} Feb 24 10:12:37 crc kubenswrapper[4755]: I0224 10:12:37.013024 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-xqkqm" event={"ID":"991ae998-9883-47da-bf65-d4439a482124","Type":"ContainerDied","Data":"919b8adc4ee75c30de409453755cc99236b13eea32fbc4120618ffcb085de9c5"} Feb 24 10:12:37 crc kubenswrapper[4755]: I0224 10:12:37.013055 4755 scope.go:117] "RemoveContainer" containerID="cd3d2334bccea4a5d7f5655a3e8eb96712ff78b460875a8b3048dab909a8b1ca" Feb 24 10:12:37 crc kubenswrapper[4755]: I0224 10:12:37.013294 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xqkqm" Feb 24 10:12:37 crc kubenswrapper[4755]: I0224 10:12:37.048263 4755 scope.go:117] "RemoveContainer" containerID="087c3d8bc79741a992af02c901cd7c46a6757a4b99d5ae1820edf6c89d280643" Feb 24 10:12:37 crc kubenswrapper[4755]: I0224 10:12:37.063431 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xqkqm"] Feb 24 10:12:37 crc kubenswrapper[4755]: I0224 10:12:37.070440 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xqkqm"] Feb 24 10:12:37 crc kubenswrapper[4755]: I0224 10:12:37.088688 4755 scope.go:117] "RemoveContainer" containerID="f2e851ffd67b2103261b055ad10f65bd972fdc5b99f348e688d2dae315c416c0" Feb 24 10:12:37 crc kubenswrapper[4755]: I0224 10:12:37.112880 4755 scope.go:117] "RemoveContainer" containerID="cd3d2334bccea4a5d7f5655a3e8eb96712ff78b460875a8b3048dab909a8b1ca" Feb 24 10:12:37 crc kubenswrapper[4755]: E0224 10:12:37.113329 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd3d2334bccea4a5d7f5655a3e8eb96712ff78b460875a8b3048dab909a8b1ca\": container with ID starting with cd3d2334bccea4a5d7f5655a3e8eb96712ff78b460875a8b3048dab909a8b1ca not found: ID does not exist" containerID="cd3d2334bccea4a5d7f5655a3e8eb96712ff78b460875a8b3048dab909a8b1ca" Feb 24 10:12:37 crc kubenswrapper[4755]: I0224 10:12:37.113393 4755 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd3d2334bccea4a5d7f5655a3e8eb96712ff78b460875a8b3048dab909a8b1ca"} err="failed to get container status \"cd3d2334bccea4a5d7f5655a3e8eb96712ff78b460875a8b3048dab909a8b1ca\": rpc error: code = NotFound desc = could not find container \"cd3d2334bccea4a5d7f5655a3e8eb96712ff78b460875a8b3048dab909a8b1ca\": container with ID starting with cd3d2334bccea4a5d7f5655a3e8eb96712ff78b460875a8b3048dab909a8b1ca not found: ID does not exist" Feb 24 10:12:37 crc kubenswrapper[4755]: I0224 10:12:37.113422 4755 scope.go:117] "RemoveContainer" containerID="087c3d8bc79741a992af02c901cd7c46a6757a4b99d5ae1820edf6c89d280643" Feb 24 10:12:37 crc kubenswrapper[4755]: E0224 10:12:37.113843 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"087c3d8bc79741a992af02c901cd7c46a6757a4b99d5ae1820edf6c89d280643\": container with ID starting with 087c3d8bc79741a992af02c901cd7c46a6757a4b99d5ae1820edf6c89d280643 not found: ID does not exist" containerID="087c3d8bc79741a992af02c901cd7c46a6757a4b99d5ae1820edf6c89d280643" Feb 24 10:12:37 crc kubenswrapper[4755]: I0224 10:12:37.113903 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"087c3d8bc79741a992af02c901cd7c46a6757a4b99d5ae1820edf6c89d280643"} err="failed to get container status \"087c3d8bc79741a992af02c901cd7c46a6757a4b99d5ae1820edf6c89d280643\": rpc error: code = NotFound desc = could not find container \"087c3d8bc79741a992af02c901cd7c46a6757a4b99d5ae1820edf6c89d280643\": container with ID starting with 087c3d8bc79741a992af02c901cd7c46a6757a4b99d5ae1820edf6c89d280643 not found: ID does not exist" Feb 24 10:12:37 crc kubenswrapper[4755]: I0224 10:12:37.113922 4755 scope.go:117] "RemoveContainer" containerID="f2e851ffd67b2103261b055ad10f65bd972fdc5b99f348e688d2dae315c416c0" Feb 24 10:12:37 crc kubenswrapper[4755]: E0224 
10:12:37.114193 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2e851ffd67b2103261b055ad10f65bd972fdc5b99f348e688d2dae315c416c0\": container with ID starting with f2e851ffd67b2103261b055ad10f65bd972fdc5b99f348e688d2dae315c416c0 not found: ID does not exist" containerID="f2e851ffd67b2103261b055ad10f65bd972fdc5b99f348e688d2dae315c416c0" Feb 24 10:12:37 crc kubenswrapper[4755]: I0224 10:12:37.114224 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2e851ffd67b2103261b055ad10f65bd972fdc5b99f348e688d2dae315c416c0"} err="failed to get container status \"f2e851ffd67b2103261b055ad10f65bd972fdc5b99f348e688d2dae315c416c0\": rpc error: code = NotFound desc = could not find container \"f2e851ffd67b2103261b055ad10f65bd972fdc5b99f348e688d2dae315c416c0\": container with ID starting with f2e851ffd67b2103261b055ad10f65bd972fdc5b99f348e688d2dae315c416c0 not found: ID does not exist" Feb 24 10:12:38 crc kubenswrapper[4755]: I0224 10:12:38.327530 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="991ae998-9883-47da-bf65-d4439a482124" path="/var/lib/kubelet/pods/991ae998-9883-47da-bf65-d4439a482124/volumes" Feb 24 10:12:38 crc kubenswrapper[4755]: I0224 10:12:38.427552 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:38 crc kubenswrapper[4755]: I0224 10:12:38.428133 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:38 crc kubenswrapper[4755]: I0224 10:12:38.495020 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:39 crc kubenswrapper[4755]: I0224 10:12:39.031911 4755 generic.go:334] "Generic (PLEG): container finished" podID="5af41058-f556-416b-ae03-ad3e98c6c49a" 
containerID="96a0c2616e8ad27e07338530dc70c7ffa3d1b9ca2a87f58849ad96b192b8fdeb" exitCode=0 Feb 24 10:12:39 crc kubenswrapper[4755]: I0224 10:12:39.031973 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c" event={"ID":"5af41058-f556-416b-ae03-ad3e98c6c49a","Type":"ContainerDied","Data":"96a0c2616e8ad27e07338530dc70c7ffa3d1b9ca2a87f58849ad96b192b8fdeb"} Feb 24 10:12:39 crc kubenswrapper[4755]: I0224 10:12:39.087677 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:40 crc kubenswrapper[4755]: I0224 10:12:40.043371 4755 generic.go:334] "Generic (PLEG): container finished" podID="5af41058-f556-416b-ae03-ad3e98c6c49a" containerID="0b58327882005a7b331f0e5311ae14f129dc16f42c7ea75187ba04609b61e7e3" exitCode=0 Feb 24 10:12:40 crc kubenswrapper[4755]: I0224 10:12:40.043452 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c" event={"ID":"5af41058-f556-416b-ae03-ad3e98c6c49a","Type":"ContainerDied","Data":"0b58327882005a7b331f0e5311ae14f129dc16f42c7ea75187ba04609b61e7e3"} Feb 24 10:12:41 crc kubenswrapper[4755]: I0224 10:12:41.361284 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-prsxg"] Feb 24 10:12:41 crc kubenswrapper[4755]: I0224 10:12:41.380889 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c" Feb 24 10:12:41 crc kubenswrapper[4755]: I0224 10:12:41.469291 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5af41058-f556-416b-ae03-ad3e98c6c49a-util\") pod \"5af41058-f556-416b-ae03-ad3e98c6c49a\" (UID: \"5af41058-f556-416b-ae03-ad3e98c6c49a\") " Feb 24 10:12:41 crc kubenswrapper[4755]: I0224 10:12:41.470032 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5af41058-f556-416b-ae03-ad3e98c6c49a-bundle\") pod \"5af41058-f556-416b-ae03-ad3e98c6c49a\" (UID: \"5af41058-f556-416b-ae03-ad3e98c6c49a\") " Feb 24 10:12:41 crc kubenswrapper[4755]: I0224 10:12:41.470114 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcmh7\" (UniqueName: \"kubernetes.io/projected/5af41058-f556-416b-ae03-ad3e98c6c49a-kube-api-access-zcmh7\") pod \"5af41058-f556-416b-ae03-ad3e98c6c49a\" (UID: \"5af41058-f556-416b-ae03-ad3e98c6c49a\") " Feb 24 10:12:41 crc kubenswrapper[4755]: I0224 10:12:41.470959 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5af41058-f556-416b-ae03-ad3e98c6c49a-bundle" (OuterVolumeSpecName: "bundle") pod "5af41058-f556-416b-ae03-ad3e98c6c49a" (UID: "5af41058-f556-416b-ae03-ad3e98c6c49a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:12:41 crc kubenswrapper[4755]: I0224 10:12:41.476883 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5af41058-f556-416b-ae03-ad3e98c6c49a-kube-api-access-zcmh7" (OuterVolumeSpecName: "kube-api-access-zcmh7") pod "5af41058-f556-416b-ae03-ad3e98c6c49a" (UID: "5af41058-f556-416b-ae03-ad3e98c6c49a"). InnerVolumeSpecName "kube-api-access-zcmh7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:12:41 crc kubenswrapper[4755]: I0224 10:12:41.571704 4755 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5af41058-f556-416b-ae03-ad3e98c6c49a-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 10:12:41 crc kubenswrapper[4755]: I0224 10:12:41.571731 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcmh7\" (UniqueName: \"kubernetes.io/projected/5af41058-f556-416b-ae03-ad3e98c6c49a-kube-api-access-zcmh7\") on node \"crc\" DevicePath \"\"" Feb 24 10:12:41 crc kubenswrapper[4755]: I0224 10:12:41.741186 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5af41058-f556-416b-ae03-ad3e98c6c49a-util" (OuterVolumeSpecName: "util") pod "5af41058-f556-416b-ae03-ad3e98c6c49a" (UID: "5af41058-f556-416b-ae03-ad3e98c6c49a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:12:41 crc kubenswrapper[4755]: I0224 10:12:41.774167 4755 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5af41058-f556-416b-ae03-ad3e98c6c49a-util\") on node \"crc\" DevicePath \"\"" Feb 24 10:12:42 crc kubenswrapper[4755]: I0224 10:12:42.062326 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c" event={"ID":"5af41058-f556-416b-ae03-ad3e98c6c49a","Type":"ContainerDied","Data":"d8a0e1c13c22a084b4ea800a9eedc1ca6c11536c2b851092d79ec5902f279d6b"} Feb 24 10:12:42 crc kubenswrapper[4755]: I0224 10:12:42.062364 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecahvs5c" Feb 24 10:12:42 crc kubenswrapper[4755]: I0224 10:12:42.062392 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8a0e1c13c22a084b4ea800a9eedc1ca6c11536c2b851092d79ec5902f279d6b" Feb 24 10:12:42 crc kubenswrapper[4755]: I0224 10:12:42.062535 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-prsxg" podUID="864d4cf3-3c0d-4956-843d-5fa2f0fe057a" containerName="registry-server" containerID="cri-o://48f79a137f8d0acc84398c83457861a2b760002943156d76958a423db76cff3c" gracePeriod=2 Feb 24 10:12:42 crc kubenswrapper[4755]: I0224 10:12:42.479578 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:42 crc kubenswrapper[4755]: I0224 10:12:42.585693 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48cqw\" (UniqueName: \"kubernetes.io/projected/864d4cf3-3c0d-4956-843d-5fa2f0fe057a-kube-api-access-48cqw\") pod \"864d4cf3-3c0d-4956-843d-5fa2f0fe057a\" (UID: \"864d4cf3-3c0d-4956-843d-5fa2f0fe057a\") " Feb 24 10:12:42 crc kubenswrapper[4755]: I0224 10:12:42.585773 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864d4cf3-3c0d-4956-843d-5fa2f0fe057a-utilities\") pod \"864d4cf3-3c0d-4956-843d-5fa2f0fe057a\" (UID: \"864d4cf3-3c0d-4956-843d-5fa2f0fe057a\") " Feb 24 10:12:42 crc kubenswrapper[4755]: I0224 10:12:42.585790 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864d4cf3-3c0d-4956-843d-5fa2f0fe057a-catalog-content\") pod \"864d4cf3-3c0d-4956-843d-5fa2f0fe057a\" (UID: \"864d4cf3-3c0d-4956-843d-5fa2f0fe057a\") " Feb 24 10:12:42 crc 
kubenswrapper[4755]: I0224 10:12:42.587653 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/864d4cf3-3c0d-4956-843d-5fa2f0fe057a-utilities" (OuterVolumeSpecName: "utilities") pod "864d4cf3-3c0d-4956-843d-5fa2f0fe057a" (UID: "864d4cf3-3c0d-4956-843d-5fa2f0fe057a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:12:42 crc kubenswrapper[4755]: I0224 10:12:42.594182 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/864d4cf3-3c0d-4956-843d-5fa2f0fe057a-kube-api-access-48cqw" (OuterVolumeSpecName: "kube-api-access-48cqw") pod "864d4cf3-3c0d-4956-843d-5fa2f0fe057a" (UID: "864d4cf3-3c0d-4956-843d-5fa2f0fe057a"). InnerVolumeSpecName "kube-api-access-48cqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:12:42 crc kubenswrapper[4755]: I0224 10:12:42.669463 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/864d4cf3-3c0d-4956-843d-5fa2f0fe057a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "864d4cf3-3c0d-4956-843d-5fa2f0fe057a" (UID: "864d4cf3-3c0d-4956-843d-5fa2f0fe057a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:12:42 crc kubenswrapper[4755]: I0224 10:12:42.687250 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864d4cf3-3c0d-4956-843d-5fa2f0fe057a-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 10:12:42 crc kubenswrapper[4755]: I0224 10:12:42.687291 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864d4cf3-3c0d-4956-843d-5fa2f0fe057a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 10:12:42 crc kubenswrapper[4755]: I0224 10:12:42.687306 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48cqw\" (UniqueName: \"kubernetes.io/projected/864d4cf3-3c0d-4956-843d-5fa2f0fe057a-kube-api-access-48cqw\") on node \"crc\" DevicePath \"\"" Feb 24 10:12:43 crc kubenswrapper[4755]: I0224 10:12:43.073102 4755 generic.go:334] "Generic (PLEG): container finished" podID="864d4cf3-3c0d-4956-843d-5fa2f0fe057a" containerID="48f79a137f8d0acc84398c83457861a2b760002943156d76958a423db76cff3c" exitCode=0 Feb 24 10:12:43 crc kubenswrapper[4755]: I0224 10:12:43.074041 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-prsxg" Feb 24 10:12:43 crc kubenswrapper[4755]: I0224 10:12:43.073211 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prsxg" event={"ID":"864d4cf3-3c0d-4956-843d-5fa2f0fe057a","Type":"ContainerDied","Data":"48f79a137f8d0acc84398c83457861a2b760002943156d76958a423db76cff3c"} Feb 24 10:12:43 crc kubenswrapper[4755]: I0224 10:12:43.077455 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prsxg" event={"ID":"864d4cf3-3c0d-4956-843d-5fa2f0fe057a","Type":"ContainerDied","Data":"6d311a774692c095a2cb04c66e8e4b1a30fb08e77e456443e746ce39c14e41c4"} Feb 24 10:12:43 crc kubenswrapper[4755]: I0224 10:12:43.077624 4755 scope.go:117] "RemoveContainer" containerID="48f79a137f8d0acc84398c83457861a2b760002943156d76958a423db76cff3c" Feb 24 10:12:43 crc kubenswrapper[4755]: I0224 10:12:43.110718 4755 scope.go:117] "RemoveContainer" containerID="314a578f42393fc2e3e05698990115e60e3bb3c4913541f1ad246b2e5b2b7a62" Feb 24 10:12:43 crc kubenswrapper[4755]: I0224 10:12:43.133656 4755 scope.go:117] "RemoveContainer" containerID="2bf80d0c75f8db63a365f73f9e15288e17a08ee1d3a5243c80d0207b6cea77dd" Feb 24 10:12:43 crc kubenswrapper[4755]: I0224 10:12:43.166009 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-prsxg"] Feb 24 10:12:43 crc kubenswrapper[4755]: I0224 10:12:43.168793 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-prsxg"] Feb 24 10:12:43 crc kubenswrapper[4755]: I0224 10:12:43.186365 4755 scope.go:117] "RemoveContainer" containerID="48f79a137f8d0acc84398c83457861a2b760002943156d76958a423db76cff3c" Feb 24 10:12:43 crc kubenswrapper[4755]: E0224 10:12:43.186907 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"48f79a137f8d0acc84398c83457861a2b760002943156d76958a423db76cff3c\": container with ID starting with 48f79a137f8d0acc84398c83457861a2b760002943156d76958a423db76cff3c not found: ID does not exist" containerID="48f79a137f8d0acc84398c83457861a2b760002943156d76958a423db76cff3c" Feb 24 10:12:43 crc kubenswrapper[4755]: I0224 10:12:43.186962 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48f79a137f8d0acc84398c83457861a2b760002943156d76958a423db76cff3c"} err="failed to get container status \"48f79a137f8d0acc84398c83457861a2b760002943156d76958a423db76cff3c\": rpc error: code = NotFound desc = could not find container \"48f79a137f8d0acc84398c83457861a2b760002943156d76958a423db76cff3c\": container with ID starting with 48f79a137f8d0acc84398c83457861a2b760002943156d76958a423db76cff3c not found: ID does not exist" Feb 24 10:12:43 crc kubenswrapper[4755]: I0224 10:12:43.187003 4755 scope.go:117] "RemoveContainer" containerID="314a578f42393fc2e3e05698990115e60e3bb3c4913541f1ad246b2e5b2b7a62" Feb 24 10:12:43 crc kubenswrapper[4755]: E0224 10:12:43.187393 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"314a578f42393fc2e3e05698990115e60e3bb3c4913541f1ad246b2e5b2b7a62\": container with ID starting with 314a578f42393fc2e3e05698990115e60e3bb3c4913541f1ad246b2e5b2b7a62 not found: ID does not exist" containerID="314a578f42393fc2e3e05698990115e60e3bb3c4913541f1ad246b2e5b2b7a62" Feb 24 10:12:43 crc kubenswrapper[4755]: I0224 10:12:43.187459 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"314a578f42393fc2e3e05698990115e60e3bb3c4913541f1ad246b2e5b2b7a62"} err="failed to get container status \"314a578f42393fc2e3e05698990115e60e3bb3c4913541f1ad246b2e5b2b7a62\": rpc error: code = NotFound desc = could not find container \"314a578f42393fc2e3e05698990115e60e3bb3c4913541f1ad246b2e5b2b7a62\": container with ID 
starting with 314a578f42393fc2e3e05698990115e60e3bb3c4913541f1ad246b2e5b2b7a62 not found: ID does not exist" Feb 24 10:12:43 crc kubenswrapper[4755]: I0224 10:12:43.187504 4755 scope.go:117] "RemoveContainer" containerID="2bf80d0c75f8db63a365f73f9e15288e17a08ee1d3a5243c80d0207b6cea77dd" Feb 24 10:12:43 crc kubenswrapper[4755]: E0224 10:12:43.187954 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bf80d0c75f8db63a365f73f9e15288e17a08ee1d3a5243c80d0207b6cea77dd\": container with ID starting with 2bf80d0c75f8db63a365f73f9e15288e17a08ee1d3a5243c80d0207b6cea77dd not found: ID does not exist" containerID="2bf80d0c75f8db63a365f73f9e15288e17a08ee1d3a5243c80d0207b6cea77dd" Feb 24 10:12:43 crc kubenswrapper[4755]: I0224 10:12:43.187993 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bf80d0c75f8db63a365f73f9e15288e17a08ee1d3a5243c80d0207b6cea77dd"} err="failed to get container status \"2bf80d0c75f8db63a365f73f9e15288e17a08ee1d3a5243c80d0207b6cea77dd\": rpc error: code = NotFound desc = could not find container \"2bf80d0c75f8db63a365f73f9e15288e17a08ee1d3a5243c80d0207b6cea77dd\": container with ID starting with 2bf80d0c75f8db63a365f73f9e15288e17a08ee1d3a5243c80d0207b6cea77dd not found: ID does not exist" Feb 24 10:12:44 crc kubenswrapper[4755]: I0224 10:12:44.090059 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-ww89q"] Feb 24 10:12:44 crc kubenswrapper[4755]: E0224 10:12:44.090259 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="991ae998-9883-47da-bf65-d4439a482124" containerName="extract-utilities" Feb 24 10:12:44 crc kubenswrapper[4755]: I0224 10:12:44.090271 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="991ae998-9883-47da-bf65-d4439a482124" containerName="extract-utilities" Feb 24 10:12:44 crc kubenswrapper[4755]: E0224 10:12:44.090283 4755 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864d4cf3-3c0d-4956-843d-5fa2f0fe057a" containerName="registry-server" Feb 24 10:12:44 crc kubenswrapper[4755]: I0224 10:12:44.090290 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="864d4cf3-3c0d-4956-843d-5fa2f0fe057a" containerName="registry-server" Feb 24 10:12:44 crc kubenswrapper[4755]: E0224 10:12:44.090299 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af41058-f556-416b-ae03-ad3e98c6c49a" containerName="pull" Feb 24 10:12:44 crc kubenswrapper[4755]: I0224 10:12:44.090305 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af41058-f556-416b-ae03-ad3e98c6c49a" containerName="pull" Feb 24 10:12:44 crc kubenswrapper[4755]: E0224 10:12:44.090315 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af41058-f556-416b-ae03-ad3e98c6c49a" containerName="util" Feb 24 10:12:44 crc kubenswrapper[4755]: I0224 10:12:44.090320 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af41058-f556-416b-ae03-ad3e98c6c49a" containerName="util" Feb 24 10:12:44 crc kubenswrapper[4755]: E0224 10:12:44.090330 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864d4cf3-3c0d-4956-843d-5fa2f0fe057a" containerName="extract-utilities" Feb 24 10:12:44 crc kubenswrapper[4755]: I0224 10:12:44.090335 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="864d4cf3-3c0d-4956-843d-5fa2f0fe057a" containerName="extract-utilities" Feb 24 10:12:44 crc kubenswrapper[4755]: E0224 10:12:44.090343 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864d4cf3-3c0d-4956-843d-5fa2f0fe057a" containerName="extract-content" Feb 24 10:12:44 crc kubenswrapper[4755]: I0224 10:12:44.090349 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="864d4cf3-3c0d-4956-843d-5fa2f0fe057a" containerName="extract-content" Feb 24 10:12:44 crc kubenswrapper[4755]: E0224 10:12:44.090358 4755 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="5af41058-f556-416b-ae03-ad3e98c6c49a" containerName="extract" Feb 24 10:12:44 crc kubenswrapper[4755]: I0224 10:12:44.090363 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af41058-f556-416b-ae03-ad3e98c6c49a" containerName="extract" Feb 24 10:12:44 crc kubenswrapper[4755]: E0224 10:12:44.090373 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="991ae998-9883-47da-bf65-d4439a482124" containerName="registry-server" Feb 24 10:12:44 crc kubenswrapper[4755]: I0224 10:12:44.090379 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="991ae998-9883-47da-bf65-d4439a482124" containerName="registry-server" Feb 24 10:12:44 crc kubenswrapper[4755]: E0224 10:12:44.090385 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="991ae998-9883-47da-bf65-d4439a482124" containerName="extract-content" Feb 24 10:12:44 crc kubenswrapper[4755]: I0224 10:12:44.090390 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="991ae998-9883-47da-bf65-d4439a482124" containerName="extract-content" Feb 24 10:12:44 crc kubenswrapper[4755]: I0224 10:12:44.090480 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="991ae998-9883-47da-bf65-d4439a482124" containerName="registry-server" Feb 24 10:12:44 crc kubenswrapper[4755]: I0224 10:12:44.090491 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="5af41058-f556-416b-ae03-ad3e98c6c49a" containerName="extract" Feb 24 10:12:44 crc kubenswrapper[4755]: I0224 10:12:44.090502 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="864d4cf3-3c0d-4956-843d-5fa2f0fe057a" containerName="registry-server" Feb 24 10:12:44 crc kubenswrapper[4755]: I0224 10:12:44.090855 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-ww89q" Feb 24 10:12:44 crc kubenswrapper[4755]: I0224 10:12:44.093775 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 24 10:12:44 crc kubenswrapper[4755]: I0224 10:12:44.094027 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 24 10:12:44 crc kubenswrapper[4755]: I0224 10:12:44.095374 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-qfkzp" Feb 24 10:12:44 crc kubenswrapper[4755]: I0224 10:12:44.105084 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh5w8\" (UniqueName: \"kubernetes.io/projected/07318fd7-5c32-49a0-95f9-0a7b82a2b6e4-kube-api-access-wh5w8\") pod \"nmstate-operator-694c9596b7-ww89q\" (UID: \"07318fd7-5c32-49a0-95f9-0a7b82a2b6e4\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-ww89q" Feb 24 10:12:44 crc kubenswrapper[4755]: I0224 10:12:44.106882 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-ww89q"] Feb 24 10:12:44 crc kubenswrapper[4755]: I0224 10:12:44.206175 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh5w8\" (UniqueName: \"kubernetes.io/projected/07318fd7-5c32-49a0-95f9-0a7b82a2b6e4-kube-api-access-wh5w8\") pod \"nmstate-operator-694c9596b7-ww89q\" (UID: \"07318fd7-5c32-49a0-95f9-0a7b82a2b6e4\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-ww89q" Feb 24 10:12:44 crc kubenswrapper[4755]: I0224 10:12:44.229328 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh5w8\" (UniqueName: \"kubernetes.io/projected/07318fd7-5c32-49a0-95f9-0a7b82a2b6e4-kube-api-access-wh5w8\") pod \"nmstate-operator-694c9596b7-ww89q\" (UID: 
\"07318fd7-5c32-49a0-95f9-0a7b82a2b6e4\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-ww89q" Feb 24 10:12:44 crc kubenswrapper[4755]: I0224 10:12:44.326287 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="864d4cf3-3c0d-4956-843d-5fa2f0fe057a" path="/var/lib/kubelet/pods/864d4cf3-3c0d-4956-843d-5fa2f0fe057a/volumes" Feb 24 10:12:44 crc kubenswrapper[4755]: I0224 10:12:44.406217 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-ww89q" Feb 24 10:12:44 crc kubenswrapper[4755]: I0224 10:12:44.683375 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-ww89q"] Feb 24 10:12:45 crc kubenswrapper[4755]: I0224 10:12:45.087017 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-ww89q" event={"ID":"07318fd7-5c32-49a0-95f9-0a7b82a2b6e4","Type":"ContainerStarted","Data":"8ff94a562b48abcd80995d3376c14c0a0ab9f38c4bbce6939601d91c00b39775"} Feb 24 10:12:47 crc kubenswrapper[4755]: I0224 10:12:47.101468 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-ww89q" event={"ID":"07318fd7-5c32-49a0-95f9-0a7b82a2b6e4","Type":"ContainerStarted","Data":"2495a00577efa82f29f6b315318864ae68d70c46c1867469a15ac149d92dfd54"} Feb 24 10:12:47 crc kubenswrapper[4755]: I0224 10:12:47.126734 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-ww89q" podStartSLOduration=1.109629284 podStartE2EDuration="3.12668855s" podCreationTimestamp="2026-02-24 10:12:44 +0000 UTC" firstStartedPulling="2026-02-24 10:12:44.697533402 +0000 UTC m=+1069.154055945" lastFinishedPulling="2026-02-24 10:12:46.714592658 +0000 UTC m=+1071.171115211" observedRunningTime="2026-02-24 10:12:47.122458028 +0000 UTC m=+1071.578980601" watchObservedRunningTime="2026-02-24 10:12:47.12668855 +0000 UTC 
m=+1071.583211103" Feb 24 10:12:51 crc kubenswrapper[4755]: I0224 10:12:51.694673 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:12:51 crc kubenswrapper[4755]: I0224 10:12:51.695344 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 10:12:51 crc kubenswrapper[4755]: I0224 10:12:51.695404 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" Feb 24 10:12:51 crc kubenswrapper[4755]: I0224 10:12:51.696157 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a16cb28625dc72750f4129b2fe696ae5e7f59a186c5c1bd832753d622e238c1f"} pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 10:12:51 crc kubenswrapper[4755]: I0224 10:12:51.696256 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" containerID="cri-o://a16cb28625dc72750f4129b2fe696ae5e7f59a186c5c1bd832753d622e238c1f" gracePeriod=600 Feb 24 10:12:52 crc kubenswrapper[4755]: I0224 10:12:52.137735 4755 generic.go:334] "Generic (PLEG): container finished" podID="f6407399-185a-4b27-bd1d-d3816e43a0b5" 
containerID="a16cb28625dc72750f4129b2fe696ae5e7f59a186c5c1bd832753d622e238c1f" exitCode=0 Feb 24 10:12:52 crc kubenswrapper[4755]: I0224 10:12:52.137822 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerDied","Data":"a16cb28625dc72750f4129b2fe696ae5e7f59a186c5c1bd832753d622e238c1f"} Feb 24 10:12:52 crc kubenswrapper[4755]: I0224 10:12:52.138493 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerStarted","Data":"805e6e8826e15b1db15a276c8f3343a64e680fde18436416c1dae4ce97e5fa1f"} Feb 24 10:12:52 crc kubenswrapper[4755]: I0224 10:12:52.138594 4755 scope.go:117] "RemoveContainer" containerID="4c3d2ec1f41e0bfd106b66de040fc84e954e7115f3726ec67eeb0571aad4eeac" Feb 24 10:12:54 crc kubenswrapper[4755]: I0224 10:12:54.984536 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-clbc6"] Feb 24 10:12:54 crc kubenswrapper[4755]: I0224 10:12:54.985813 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-clbc6" Feb 24 10:12:54 crc kubenswrapper[4755]: I0224 10:12:54.988268 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-bpkdz" Feb 24 10:12:54 crc kubenswrapper[4755]: I0224 10:12:54.994352 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-ldl9d"] Feb 24 10:12:54 crc kubenswrapper[4755]: I0224 10:12:54.995427 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ldl9d" Feb 24 10:12:54 crc kubenswrapper[4755]: I0224 10:12:54.997061 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 24 10:12:54 crc kubenswrapper[4755]: I0224 10:12:54.998688 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-clbc6"] Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.010025 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-z46dq"] Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.010672 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-z46dq" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.043925 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4014d566-c8c7-41e2-bb30-a68eb7010700-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-ldl9d\" (UID: \"4014d566-c8c7-41e2-bb30-a68eb7010700\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ldl9d" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.043992 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e599031c-4dc6-4f48-bd1b-f9ace444164b-ovs-socket\") pod \"nmstate-handler-z46dq\" (UID: \"e599031c-4dc6-4f48-bd1b-f9ace444164b\") " pod="openshift-nmstate/nmstate-handler-z46dq" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.044034 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rtsh\" (UniqueName: \"kubernetes.io/projected/e599031c-4dc6-4f48-bd1b-f9ace444164b-kube-api-access-5rtsh\") pod \"nmstate-handler-z46dq\" (UID: \"e599031c-4dc6-4f48-bd1b-f9ace444164b\") " 
pod="openshift-nmstate/nmstate-handler-z46dq" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.044055 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e599031c-4dc6-4f48-bd1b-f9ace444164b-nmstate-lock\") pod \"nmstate-handler-z46dq\" (UID: \"e599031c-4dc6-4f48-bd1b-f9ace444164b\") " pod="openshift-nmstate/nmstate-handler-z46dq" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.044190 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e599031c-4dc6-4f48-bd1b-f9ace444164b-dbus-socket\") pod \"nmstate-handler-z46dq\" (UID: \"e599031c-4dc6-4f48-bd1b-f9ace444164b\") " pod="openshift-nmstate/nmstate-handler-z46dq" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.044263 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp5nq\" (UniqueName: \"kubernetes.io/projected/4014d566-c8c7-41e2-bb30-a68eb7010700-kube-api-access-fp5nq\") pod \"nmstate-webhook-866bcb46dc-ldl9d\" (UID: \"4014d566-c8c7-41e2-bb30-a68eb7010700\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ldl9d" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.044289 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvzjs\" (UniqueName: \"kubernetes.io/projected/48004dcc-7cc0-427c-9ec1-4770cc4712ce-kube-api-access-zvzjs\") pod \"nmstate-metrics-58c85c668d-clbc6\" (UID: \"48004dcc-7cc0-427c-9ec1-4770cc4712ce\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-clbc6" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.053465 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-ldl9d"] Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.100424 4755 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5glvw"] Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.101028 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5glvw" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.102868 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.103797 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.104465 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-5mh8n" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.117267 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5glvw"] Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.145104 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rtsh\" (UniqueName: \"kubernetes.io/projected/e599031c-4dc6-4f48-bd1b-f9ace444164b-kube-api-access-5rtsh\") pod \"nmstate-handler-z46dq\" (UID: \"e599031c-4dc6-4f48-bd1b-f9ace444164b\") " pod="openshift-nmstate/nmstate-handler-z46dq" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.145163 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e599031c-4dc6-4f48-bd1b-f9ace444164b-nmstate-lock\") pod \"nmstate-handler-z46dq\" (UID: \"e599031c-4dc6-4f48-bd1b-f9ace444164b\") " pod="openshift-nmstate/nmstate-handler-z46dq" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.145201 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/e599031c-4dc6-4f48-bd1b-f9ace444164b-dbus-socket\") pod \"nmstate-handler-z46dq\" (UID: \"e599031c-4dc6-4f48-bd1b-f9ace444164b\") " pod="openshift-nmstate/nmstate-handler-z46dq" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.145254 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp5nq\" (UniqueName: \"kubernetes.io/projected/4014d566-c8c7-41e2-bb30-a68eb7010700-kube-api-access-fp5nq\") pod \"nmstate-webhook-866bcb46dc-ldl9d\" (UID: \"4014d566-c8c7-41e2-bb30-a68eb7010700\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ldl9d" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.145276 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvzjs\" (UniqueName: \"kubernetes.io/projected/48004dcc-7cc0-427c-9ec1-4770cc4712ce-kube-api-access-zvzjs\") pod \"nmstate-metrics-58c85c668d-clbc6\" (UID: \"48004dcc-7cc0-427c-9ec1-4770cc4712ce\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-clbc6" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.145283 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e599031c-4dc6-4f48-bd1b-f9ace444164b-nmstate-lock\") pod \"nmstate-handler-z46dq\" (UID: \"e599031c-4dc6-4f48-bd1b-f9ace444164b\") " pod="openshift-nmstate/nmstate-handler-z46dq" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.145315 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4014d566-c8c7-41e2-bb30-a68eb7010700-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-ldl9d\" (UID: \"4014d566-c8c7-41e2-bb30-a68eb7010700\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ldl9d" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.145344 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/e599031c-4dc6-4f48-bd1b-f9ace444164b-ovs-socket\") pod \"nmstate-handler-z46dq\" (UID: \"e599031c-4dc6-4f48-bd1b-f9ace444164b\") " pod="openshift-nmstate/nmstate-handler-z46dq" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.145425 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e599031c-4dc6-4f48-bd1b-f9ace444164b-ovs-socket\") pod \"nmstate-handler-z46dq\" (UID: \"e599031c-4dc6-4f48-bd1b-f9ace444164b\") " pod="openshift-nmstate/nmstate-handler-z46dq" Feb 24 10:12:55 crc kubenswrapper[4755]: E0224 10:12:55.145494 4755 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 24 10:12:55 crc kubenswrapper[4755]: E0224 10:12:55.145551 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4014d566-c8c7-41e2-bb30-a68eb7010700-tls-key-pair podName:4014d566-c8c7-41e2-bb30-a68eb7010700 nodeName:}" failed. No retries permitted until 2026-02-24 10:12:55.645533311 +0000 UTC m=+1080.102055864 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/4014d566-c8c7-41e2-bb30-a68eb7010700-tls-key-pair") pod "nmstate-webhook-866bcb46dc-ldl9d" (UID: "4014d566-c8c7-41e2-bb30-a68eb7010700") : secret "openshift-nmstate-webhook" not found Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.145546 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e599031c-4dc6-4f48-bd1b-f9ace444164b-dbus-socket\") pod \"nmstate-handler-z46dq\" (UID: \"e599031c-4dc6-4f48-bd1b-f9ace444164b\") " pod="openshift-nmstate/nmstate-handler-z46dq" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.163820 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp5nq\" (UniqueName: \"kubernetes.io/projected/4014d566-c8c7-41e2-bb30-a68eb7010700-kube-api-access-fp5nq\") pod \"nmstate-webhook-866bcb46dc-ldl9d\" (UID: \"4014d566-c8c7-41e2-bb30-a68eb7010700\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ldl9d" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.168380 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvzjs\" (UniqueName: \"kubernetes.io/projected/48004dcc-7cc0-427c-9ec1-4770cc4712ce-kube-api-access-zvzjs\") pod \"nmstate-metrics-58c85c668d-clbc6\" (UID: \"48004dcc-7cc0-427c-9ec1-4770cc4712ce\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-clbc6" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.168979 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rtsh\" (UniqueName: \"kubernetes.io/projected/e599031c-4dc6-4f48-bd1b-f9ace444164b-kube-api-access-5rtsh\") pod \"nmstate-handler-z46dq\" (UID: \"e599031c-4dc6-4f48-bd1b-f9ace444164b\") " pod="openshift-nmstate/nmstate-handler-z46dq" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.247008 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a50b7c22-4e8e-4313-99c0-92c358ee6427-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-5glvw\" (UID: \"a50b7c22-4e8e-4313-99c0-92c358ee6427\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5glvw" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.247084 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z7fv\" (UniqueName: \"kubernetes.io/projected/a50b7c22-4e8e-4313-99c0-92c358ee6427-kube-api-access-5z7fv\") pod \"nmstate-console-plugin-5c78fc5d65-5glvw\" (UID: \"a50b7c22-4e8e-4313-99c0-92c358ee6427\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5glvw" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.247121 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a50b7c22-4e8e-4313-99c0-92c358ee6427-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-5glvw\" (UID: \"a50b7c22-4e8e-4313-99c0-92c358ee6427\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5glvw" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.275620 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5f95567bbf-vrvbm"] Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.276710 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.286565 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5f95567bbf-vrvbm"] Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.318953 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-clbc6" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.342713 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-z46dq" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.350119 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a50b7c22-4e8e-4313-99c0-92c358ee6427-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-5glvw\" (UID: \"a50b7c22-4e8e-4313-99c0-92c358ee6427\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5glvw" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.350169 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z7fv\" (UniqueName: \"kubernetes.io/projected/a50b7c22-4e8e-4313-99c0-92c358ee6427-kube-api-access-5z7fv\") pod \"nmstate-console-plugin-5c78fc5d65-5glvw\" (UID: \"a50b7c22-4e8e-4313-99c0-92c358ee6427\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5glvw" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.350206 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a50b7c22-4e8e-4313-99c0-92c358ee6427-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-5glvw\" (UID: \"a50b7c22-4e8e-4313-99c0-92c358ee6427\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5glvw" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.352574 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a50b7c22-4e8e-4313-99c0-92c358ee6427-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-5glvw\" (UID: \"a50b7c22-4e8e-4313-99c0-92c358ee6427\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5glvw" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 
10:12:55.372699 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a50b7c22-4e8e-4313-99c0-92c358ee6427-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-5glvw\" (UID: \"a50b7c22-4e8e-4313-99c0-92c358ee6427\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5glvw" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.380176 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z7fv\" (UniqueName: \"kubernetes.io/projected/a50b7c22-4e8e-4313-99c0-92c358ee6427-kube-api-access-5z7fv\") pod \"nmstate-console-plugin-5c78fc5d65-5glvw\" (UID: \"a50b7c22-4e8e-4313-99c0-92c358ee6427\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5glvw" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.420447 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5glvw" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.451757 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tqkm\" (UniqueName: \"kubernetes.io/projected/5b2a34b4-5c06-48e5-8ed8-6464ff396259-kube-api-access-6tqkm\") pod \"console-5f95567bbf-vrvbm\" (UID: \"5b2a34b4-5c06-48e5-8ed8-6464ff396259\") " pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.452870 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b2a34b4-5c06-48e5-8ed8-6464ff396259-trusted-ca-bundle\") pod \"console-5f95567bbf-vrvbm\" (UID: \"5b2a34b4-5c06-48e5-8ed8-6464ff396259\") " pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.452954 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5b2a34b4-5c06-48e5-8ed8-6464ff396259-console-oauth-config\") pod \"console-5f95567bbf-vrvbm\" (UID: \"5b2a34b4-5c06-48e5-8ed8-6464ff396259\") " pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.453039 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5b2a34b4-5c06-48e5-8ed8-6464ff396259-oauth-serving-cert\") pod \"console-5f95567bbf-vrvbm\" (UID: \"5b2a34b4-5c06-48e5-8ed8-6464ff396259\") " pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.453079 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b2a34b4-5c06-48e5-8ed8-6464ff396259-console-serving-cert\") pod \"console-5f95567bbf-vrvbm\" (UID: \"5b2a34b4-5c06-48e5-8ed8-6464ff396259\") " pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.453099 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5b2a34b4-5c06-48e5-8ed8-6464ff396259-service-ca\") pod \"console-5f95567bbf-vrvbm\" (UID: \"5b2a34b4-5c06-48e5-8ed8-6464ff396259\") " pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.453175 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5b2a34b4-5c06-48e5-8ed8-6464ff396259-console-config\") pod \"console-5f95567bbf-vrvbm\" (UID: \"5b2a34b4-5c06-48e5-8ed8-6464ff396259\") " pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.553913 4755 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5b2a34b4-5c06-48e5-8ed8-6464ff396259-console-config\") pod \"console-5f95567bbf-vrvbm\" (UID: \"5b2a34b4-5c06-48e5-8ed8-6464ff396259\") " pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.553981 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tqkm\" (UniqueName: \"kubernetes.io/projected/5b2a34b4-5c06-48e5-8ed8-6464ff396259-kube-api-access-6tqkm\") pod \"console-5f95567bbf-vrvbm\" (UID: \"5b2a34b4-5c06-48e5-8ed8-6464ff396259\") " pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.554023 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b2a34b4-5c06-48e5-8ed8-6464ff396259-trusted-ca-bundle\") pod \"console-5f95567bbf-vrvbm\" (UID: \"5b2a34b4-5c06-48e5-8ed8-6464ff396259\") " pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.554054 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5b2a34b4-5c06-48e5-8ed8-6464ff396259-console-oauth-config\") pod \"console-5f95567bbf-vrvbm\" (UID: \"5b2a34b4-5c06-48e5-8ed8-6464ff396259\") " pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.554110 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5b2a34b4-5c06-48e5-8ed8-6464ff396259-oauth-serving-cert\") pod \"console-5f95567bbf-vrvbm\" (UID: \"5b2a34b4-5c06-48e5-8ed8-6464ff396259\") " pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.554127 4755 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b2a34b4-5c06-48e5-8ed8-6464ff396259-console-serving-cert\") pod \"console-5f95567bbf-vrvbm\" (UID: \"5b2a34b4-5c06-48e5-8ed8-6464ff396259\") " pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.554139 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5b2a34b4-5c06-48e5-8ed8-6464ff396259-service-ca\") pod \"console-5f95567bbf-vrvbm\" (UID: \"5b2a34b4-5c06-48e5-8ed8-6464ff396259\") " pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.554944 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5b2a34b4-5c06-48e5-8ed8-6464ff396259-service-ca\") pod \"console-5f95567bbf-vrvbm\" (UID: \"5b2a34b4-5c06-48e5-8ed8-6464ff396259\") " pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.555089 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5b2a34b4-5c06-48e5-8ed8-6464ff396259-console-config\") pod \"console-5f95567bbf-vrvbm\" (UID: \"5b2a34b4-5c06-48e5-8ed8-6464ff396259\") " pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.555920 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5b2a34b4-5c06-48e5-8ed8-6464ff396259-oauth-serving-cert\") pod \"console-5f95567bbf-vrvbm\" (UID: \"5b2a34b4-5c06-48e5-8ed8-6464ff396259\") " pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.556159 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5b2a34b4-5c06-48e5-8ed8-6464ff396259-trusted-ca-bundle\") pod \"console-5f95567bbf-vrvbm\" (UID: \"5b2a34b4-5c06-48e5-8ed8-6464ff396259\") " pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.559447 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b2a34b4-5c06-48e5-8ed8-6464ff396259-console-serving-cert\") pod \"console-5f95567bbf-vrvbm\" (UID: \"5b2a34b4-5c06-48e5-8ed8-6464ff396259\") " pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.566658 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5b2a34b4-5c06-48e5-8ed8-6464ff396259-console-oauth-config\") pod \"console-5f95567bbf-vrvbm\" (UID: \"5b2a34b4-5c06-48e5-8ed8-6464ff396259\") " pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.570464 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tqkm\" (UniqueName: \"kubernetes.io/projected/5b2a34b4-5c06-48e5-8ed8-6464ff396259-kube-api-access-6tqkm\") pod \"console-5f95567bbf-vrvbm\" (UID: \"5b2a34b4-5c06-48e5-8ed8-6464ff396259\") " pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.592360 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5glvw"] Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.605369 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.655323 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4014d566-c8c7-41e2-bb30-a68eb7010700-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-ldl9d\" (UID: \"4014d566-c8c7-41e2-bb30-a68eb7010700\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ldl9d" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.659359 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4014d566-c8c7-41e2-bb30-a68eb7010700-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-ldl9d\" (UID: \"4014d566-c8c7-41e2-bb30-a68eb7010700\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ldl9d" Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.755599 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-clbc6"] Feb 24 10:12:55 crc kubenswrapper[4755]: W0224 10:12:55.765772 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48004dcc_7cc0_427c_9ec1_4770cc4712ce.slice/crio-e87b124f28cc9616379d1de8291f5bdc7248622fff3b116636c8734d00b0cb9f WatchSource:0}: Error finding container e87b124f28cc9616379d1de8291f5bdc7248622fff3b116636c8734d00b0cb9f: Status 404 returned error can't find the container with id e87b124f28cc9616379d1de8291f5bdc7248622fff3b116636c8734d00b0cb9f Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.766488 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5f95567bbf-vrvbm"] Feb 24 10:12:55 crc kubenswrapper[4755]: W0224 10:12:55.776522 4755 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b2a34b4_5c06_48e5_8ed8_6464ff396259.slice/crio-4b9fa9eb1a97475ed4a3745652c9da458eab7769ef490651e1fcf0e25c91fc78 WatchSource:0}: Error finding container 4b9fa9eb1a97475ed4a3745652c9da458eab7769ef490651e1fcf0e25c91fc78: Status 404 returned error can't find the container with id 4b9fa9eb1a97475ed4a3745652c9da458eab7769ef490651e1fcf0e25c91fc78 Feb 24 10:12:55 crc kubenswrapper[4755]: I0224 10:12:55.933452 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ldl9d" Feb 24 10:12:56 crc kubenswrapper[4755]: I0224 10:12:56.183971 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-z46dq" event={"ID":"e599031c-4dc6-4f48-bd1b-f9ace444164b","Type":"ContainerStarted","Data":"9c3445d0c2f8698adfd3a811d51f174fb05f6e5fdde1284d61f8119f8bc7764f"} Feb 24 10:12:56 crc kubenswrapper[4755]: I0224 10:12:56.184724 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-ldl9d"] Feb 24 10:12:56 crc kubenswrapper[4755]: I0224 10:12:56.186563 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5glvw" event={"ID":"a50b7c22-4e8e-4313-99c0-92c358ee6427","Type":"ContainerStarted","Data":"989dc248111b3f0b0d981b79eaad4b3b60853ccd306e3a9b4a0f0c35f9f1d026"} Feb 24 10:12:56 crc kubenswrapper[4755]: I0224 10:12:56.188696 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5f95567bbf-vrvbm" event={"ID":"5b2a34b4-5c06-48e5-8ed8-6464ff396259","Type":"ContainerStarted","Data":"dd26f1de01aa5c94b724dadad5134be18c2aa415c0c87d51aa90d3dca121afcc"} Feb 24 10:12:56 crc kubenswrapper[4755]: I0224 10:12:56.188737 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5f95567bbf-vrvbm" 
event={"ID":"5b2a34b4-5c06-48e5-8ed8-6464ff396259","Type":"ContainerStarted","Data":"4b9fa9eb1a97475ed4a3745652c9da458eab7769ef490651e1fcf0e25c91fc78"} Feb 24 10:12:56 crc kubenswrapper[4755]: I0224 10:12:56.189805 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-clbc6" event={"ID":"48004dcc-7cc0-427c-9ec1-4770cc4712ce","Type":"ContainerStarted","Data":"e87b124f28cc9616379d1de8291f5bdc7248622fff3b116636c8734d00b0cb9f"} Feb 24 10:12:56 crc kubenswrapper[4755]: W0224 10:12:56.193670 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4014d566_c8c7_41e2_bb30_a68eb7010700.slice/crio-81cb33207362b5dc6480f1983da545b866d8d0129135fbe90e99d8862b118e4d WatchSource:0}: Error finding container 81cb33207362b5dc6480f1983da545b866d8d0129135fbe90e99d8862b118e4d: Status 404 returned error can't find the container with id 81cb33207362b5dc6480f1983da545b866d8d0129135fbe90e99d8862b118e4d Feb 24 10:12:56 crc kubenswrapper[4755]: I0224 10:12:56.217154 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5f95567bbf-vrvbm" podStartSLOduration=1.217104854 podStartE2EDuration="1.217104854s" podCreationTimestamp="2026-02-24 10:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 10:12:56.209590029 +0000 UTC m=+1080.666112582" watchObservedRunningTime="2026-02-24 10:12:56.217104854 +0000 UTC m=+1080.673627417" Feb 24 10:12:57 crc kubenswrapper[4755]: I0224 10:12:57.195428 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ldl9d" event={"ID":"4014d566-c8c7-41e2-bb30-a68eb7010700","Type":"ContainerStarted","Data":"81cb33207362b5dc6480f1983da545b866d8d0129135fbe90e99d8862b118e4d"} Feb 24 10:12:59 crc kubenswrapper[4755]: I0224 10:12:59.213263 4755 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5glvw" event={"ID":"a50b7c22-4e8e-4313-99c0-92c358ee6427","Type":"ContainerStarted","Data":"ce260d15d3c84bd44897b81793d31726551e5486017e3c5fbee0e825612eb59a"} Feb 24 10:12:59 crc kubenswrapper[4755]: I0224 10:12:59.216812 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-clbc6" event={"ID":"48004dcc-7cc0-427c-9ec1-4770cc4712ce","Type":"ContainerStarted","Data":"b946e11458ab06c893da94be8fa9f05d2fe6b50311a855cb873ada6c91e076bd"} Feb 24 10:12:59 crc kubenswrapper[4755]: I0224 10:12:59.218304 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ldl9d" event={"ID":"4014d566-c8c7-41e2-bb30-a68eb7010700","Type":"ContainerStarted","Data":"541ed6f7fe3e27bcdadefecb176587ab49a075f8adecae051ec993923ae94225"} Feb 24 10:12:59 crc kubenswrapper[4755]: I0224 10:12:59.218475 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ldl9d" Feb 24 10:12:59 crc kubenswrapper[4755]: I0224 10:12:59.219937 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-z46dq" event={"ID":"e599031c-4dc6-4f48-bd1b-f9ace444164b","Type":"ContainerStarted","Data":"77fd39a4b7a17eac325bc75327015eb20522c25bbe867194caa22fdc9e579c13"} Feb 24 10:12:59 crc kubenswrapper[4755]: I0224 10:12:59.220245 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-z46dq" Feb 24 10:12:59 crc kubenswrapper[4755]: I0224 10:12:59.246917 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5glvw" podStartSLOduration=1.801549756 podStartE2EDuration="4.246885831s" podCreationTimestamp="2026-02-24 10:12:55 +0000 UTC" firstStartedPulling="2026-02-24 10:12:55.596665955 +0000 UTC 
m=+1080.053188498" lastFinishedPulling="2026-02-24 10:12:58.04200203 +0000 UTC m=+1082.498524573" observedRunningTime="2026-02-24 10:12:59.232193291 +0000 UTC m=+1083.688715904" watchObservedRunningTime="2026-02-24 10:12:59.246885831 +0000 UTC m=+1083.703408404" Feb 24 10:12:59 crc kubenswrapper[4755]: I0224 10:12:59.256638 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ldl9d" podStartSLOduration=3.391129919 podStartE2EDuration="5.256622836s" podCreationTimestamp="2026-02-24 10:12:54 +0000 UTC" firstStartedPulling="2026-02-24 10:12:56.196049755 +0000 UTC m=+1080.652572308" lastFinishedPulling="2026-02-24 10:12:58.061542682 +0000 UTC m=+1082.518065225" observedRunningTime="2026-02-24 10:12:59.252014312 +0000 UTC m=+1083.708536885" watchObservedRunningTime="2026-02-24 10:12:59.256622836 +0000 UTC m=+1083.713145379" Feb 24 10:12:59 crc kubenswrapper[4755]: I0224 10:12:59.274221 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-z46dq" podStartSLOduration=2.5965034510000002 podStartE2EDuration="5.274189756s" podCreationTimestamp="2026-02-24 10:12:54 +0000 UTC" firstStartedPulling="2026-02-24 10:12:55.382748663 +0000 UTC m=+1079.839271206" lastFinishedPulling="2026-02-24 10:12:58.060434928 +0000 UTC m=+1082.516957511" observedRunningTime="2026-02-24 10:12:59.265915087 +0000 UTC m=+1083.722437660" watchObservedRunningTime="2026-02-24 10:12:59.274189756 +0000 UTC m=+1083.730712339" Feb 24 10:13:00 crc kubenswrapper[4755]: E0224 10:13:00.349667 4755 certificate_manager.go:579] "Unhandled Error" err="kubernetes.io/kubelet-serving: certificate request was not signed: timed out waiting for the condition" logger="UnhandledError" Feb 24 10:13:01 crc kubenswrapper[4755]: I0224 10:13:01.238645 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-clbc6" 
event={"ID":"48004dcc-7cc0-427c-9ec1-4770cc4712ce","Type":"ContainerStarted","Data":"ac7ed89fd44bcac5811ea1808e22f875a207fb5fda28f91e33b23d540474ea0d"} Feb 24 10:13:01 crc kubenswrapper[4755]: I0224 10:13:01.262397 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-clbc6" podStartSLOduration=2.942328306 podStartE2EDuration="7.262371679s" podCreationTimestamp="2026-02-24 10:12:54 +0000 UTC" firstStartedPulling="2026-02-24 10:12:55.767554129 +0000 UTC m=+1080.224076672" lastFinishedPulling="2026-02-24 10:13:00.087597472 +0000 UTC m=+1084.544120045" observedRunningTime="2026-02-24 10:13:01.256737872 +0000 UTC m=+1085.713260435" watchObservedRunningTime="2026-02-24 10:13:01.262371679 +0000 UTC m=+1085.718894232" Feb 24 10:13:02 crc kubenswrapper[4755]: I0224 10:13:02.509616 4755 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 24 10:13:02 crc kubenswrapper[4755]: I0224 10:13:02.520736 4755 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 24 10:13:02 crc kubenswrapper[4755]: I0224 10:13:02.543823 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48580: no serving certificate available for the kubelet" Feb 24 10:13:02 crc kubenswrapper[4755]: I0224 10:13:02.579335 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48590: no serving certificate available for the kubelet" Feb 24 10:13:02 crc kubenswrapper[4755]: I0224 10:13:02.610787 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48596: no serving certificate available for the kubelet" Feb 24 10:13:02 crc kubenswrapper[4755]: I0224 10:13:02.650820 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48600: no serving certificate available for the kubelet" Feb 24 10:13:02 crc kubenswrapper[4755]: I0224 10:13:02.719650 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48616: no serving 
certificate available for the kubelet" Feb 24 10:13:02 crc kubenswrapper[4755]: I0224 10:13:02.831820 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48622: no serving certificate available for the kubelet" Feb 24 10:13:03 crc kubenswrapper[4755]: I0224 10:13:03.026455 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48624: no serving certificate available for the kubelet" Feb 24 10:13:03 crc kubenswrapper[4755]: I0224 10:13:03.373630 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48636: no serving certificate available for the kubelet" Feb 24 10:13:04 crc kubenswrapper[4755]: I0224 10:13:04.043322 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59364: no serving certificate available for the kubelet" Feb 24 10:13:05 crc kubenswrapper[4755]: I0224 10:13:05.358205 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59366: no serving certificate available for the kubelet" Feb 24 10:13:05 crc kubenswrapper[4755]: I0224 10:13:05.383037 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-z46dq" Feb 24 10:13:05 crc kubenswrapper[4755]: I0224 10:13:05.606293 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:13:05 crc kubenswrapper[4755]: I0224 10:13:05.606381 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:13:05 crc kubenswrapper[4755]: I0224 10:13:05.612097 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:13:06 crc kubenswrapper[4755]: I0224 10:13:06.276989 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5f95567bbf-vrvbm" Feb 24 10:13:06 crc kubenswrapper[4755]: I0224 10:13:06.339326 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-console/console-f9d7485db-fqscc"]
Feb 24 10:13:07 crc kubenswrapper[4755]: I0224 10:13:07.946543 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59380: no serving certificate available for the kubelet"
Feb 24 10:13:13 crc kubenswrapper[4755]: I0224 10:13:13.103641 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59394: no serving certificate available for the kubelet"
Feb 24 10:13:15 crc kubenswrapper[4755]: I0224 10:13:15.942367 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ldl9d"
Feb 24 10:13:23 crc kubenswrapper[4755]: I0224 10:13:23.375176 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42362: no serving certificate available for the kubelet"
Feb 24 10:13:30 crc kubenswrapper[4755]: I0224 10:13:30.219390 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2"]
Feb 24 10:13:30 crc kubenswrapper[4755]: I0224 10:13:30.221882 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2"
Feb 24 10:13:30 crc kubenswrapper[4755]: I0224 10:13:30.224509 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 24 10:13:30 crc kubenswrapper[4755]: I0224 10:13:30.227823 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2"]
Feb 24 10:13:30 crc kubenswrapper[4755]: I0224 10:13:30.337010 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3b0411d-e378-47ba-b192-4b69dc317737-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2\" (UID: \"e3b0411d-e378-47ba-b192-4b69dc317737\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2"
Feb 24 10:13:30 crc kubenswrapper[4755]: I0224 10:13:30.337216 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7j4j\" (UniqueName: \"kubernetes.io/projected/e3b0411d-e378-47ba-b192-4b69dc317737-kube-api-access-l7j4j\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2\" (UID: \"e3b0411d-e378-47ba-b192-4b69dc317737\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2"
Feb 24 10:13:30 crc kubenswrapper[4755]: I0224 10:13:30.337291 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3b0411d-e378-47ba-b192-4b69dc317737-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2\" (UID: \"e3b0411d-e378-47ba-b192-4b69dc317737\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2"
Feb 24 10:13:30 crc kubenswrapper[4755]: I0224 10:13:30.438348 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3b0411d-e378-47ba-b192-4b69dc317737-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2\" (UID: \"e3b0411d-e378-47ba-b192-4b69dc317737\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2"
Feb 24 10:13:30 crc kubenswrapper[4755]: I0224 10:13:30.438425 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7j4j\" (UniqueName: \"kubernetes.io/projected/e3b0411d-e378-47ba-b192-4b69dc317737-kube-api-access-l7j4j\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2\" (UID: \"e3b0411d-e378-47ba-b192-4b69dc317737\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2"
Feb 24 10:13:30 crc kubenswrapper[4755]: I0224 10:13:30.438454 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3b0411d-e378-47ba-b192-4b69dc317737-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2\" (UID: \"e3b0411d-e378-47ba-b192-4b69dc317737\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2"
Feb 24 10:13:30 crc kubenswrapper[4755]: I0224 10:13:30.439311 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3b0411d-e378-47ba-b192-4b69dc317737-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2\" (UID: \"e3b0411d-e378-47ba-b192-4b69dc317737\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2"
Feb 24 10:13:30 crc kubenswrapper[4755]: I0224 10:13:30.439316 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3b0411d-e378-47ba-b192-4b69dc317737-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2\" (UID: \"e3b0411d-e378-47ba-b192-4b69dc317737\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2"
Feb 24 10:13:30 crc kubenswrapper[4755]: I0224 10:13:30.473579 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7j4j\" (UniqueName: \"kubernetes.io/projected/e3b0411d-e378-47ba-b192-4b69dc317737-kube-api-access-l7j4j\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2\" (UID: \"e3b0411d-e378-47ba-b192-4b69dc317737\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2"
Feb 24 10:13:30 crc kubenswrapper[4755]: I0224 10:13:30.552346 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2"
Feb 24 10:13:30 crc kubenswrapper[4755]: I0224 10:13:30.887097 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2"]
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.403170 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-fqscc" podUID="c15ecede-c840-4fc8-bc38-a970796c9517" containerName="console" containerID="cri-o://96a4b69f601bc685b906f4cacf5e70c36c7d9bac9b10756a48e0e4deb9d59523" gracePeriod=15
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.446310 4755 generic.go:334] "Generic (PLEG): container finished" podID="e3b0411d-e378-47ba-b192-4b69dc317737" containerID="ede481111b8b90dca75260e0b68f28568129ab5001cd6123e9927df6f89f4298" exitCode=0
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.446381 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2" event={"ID":"e3b0411d-e378-47ba-b192-4b69dc317737","Type":"ContainerDied","Data":"ede481111b8b90dca75260e0b68f28568129ab5001cd6123e9927df6f89f4298"}
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.446428 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2" event={"ID":"e3b0411d-e378-47ba-b192-4b69dc317737","Type":"ContainerStarted","Data":"f9e1358051f0f1599fbd923f6659161b9cc64ad24d00d90c597084c18b929fa4"}
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.849191 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fqscc_c15ecede-c840-4fc8-bc38-a970796c9517/console/0.log"
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.849507 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fqscc"
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.889829 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c15ecede-c840-4fc8-bc38-a970796c9517-service-ca\") pod \"c15ecede-c840-4fc8-bc38-a970796c9517\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") "
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.889873 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c15ecede-c840-4fc8-bc38-a970796c9517-console-oauth-config\") pod \"c15ecede-c840-4fc8-bc38-a970796c9517\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") "
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.889899 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c15ecede-c840-4fc8-bc38-a970796c9517-console-serving-cert\") pod \"c15ecede-c840-4fc8-bc38-a970796c9517\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") "
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.889924 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjdpw\" (UniqueName: \"kubernetes.io/projected/c15ecede-c840-4fc8-bc38-a970796c9517-kube-api-access-kjdpw\") pod \"c15ecede-c840-4fc8-bc38-a970796c9517\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") "
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.890008 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c15ecede-c840-4fc8-bc38-a970796c9517-console-config\") pod \"c15ecede-c840-4fc8-bc38-a970796c9517\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") "
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.890039 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c15ecede-c840-4fc8-bc38-a970796c9517-oauth-serving-cert\") pod \"c15ecede-c840-4fc8-bc38-a970796c9517\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") "
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.890083 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c15ecede-c840-4fc8-bc38-a970796c9517-trusted-ca-bundle\") pod \"c15ecede-c840-4fc8-bc38-a970796c9517\" (UID: \"c15ecede-c840-4fc8-bc38-a970796c9517\") "
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.890877 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15ecede-c840-4fc8-bc38-a970796c9517-service-ca" (OuterVolumeSpecName: "service-ca") pod "c15ecede-c840-4fc8-bc38-a970796c9517" (UID: "c15ecede-c840-4fc8-bc38-a970796c9517"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.890871 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15ecede-c840-4fc8-bc38-a970796c9517-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c15ecede-c840-4fc8-bc38-a970796c9517" (UID: "c15ecede-c840-4fc8-bc38-a970796c9517"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.890927 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15ecede-c840-4fc8-bc38-a970796c9517-console-config" (OuterVolumeSpecName: "console-config") pod "c15ecede-c840-4fc8-bc38-a970796c9517" (UID: "c15ecede-c840-4fc8-bc38-a970796c9517"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.891148 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15ecede-c840-4fc8-bc38-a970796c9517-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c15ecede-c840-4fc8-bc38-a970796c9517" (UID: "c15ecede-c840-4fc8-bc38-a970796c9517"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.896851 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c15ecede-c840-4fc8-bc38-a970796c9517-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c15ecede-c840-4fc8-bc38-a970796c9517" (UID: "c15ecede-c840-4fc8-bc38-a970796c9517"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.897405 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c15ecede-c840-4fc8-bc38-a970796c9517-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c15ecede-c840-4fc8-bc38-a970796c9517" (UID: "c15ecede-c840-4fc8-bc38-a970796c9517"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.897697 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c15ecede-c840-4fc8-bc38-a970796c9517-kube-api-access-kjdpw" (OuterVolumeSpecName: "kube-api-access-kjdpw") pod "c15ecede-c840-4fc8-bc38-a970796c9517" (UID: "c15ecede-c840-4fc8-bc38-a970796c9517"). InnerVolumeSpecName "kube-api-access-kjdpw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.991506 4755 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c15ecede-c840-4fc8-bc38-a970796c9517-console-config\") on node \"crc\" DevicePath \"\""
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.991548 4755 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c15ecede-c840-4fc8-bc38-a970796c9517-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.991561 4755 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c15ecede-c840-4fc8-bc38-a970796c9517-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.991572 4755 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c15ecede-c840-4fc8-bc38-a970796c9517-service-ca\") on node \"crc\" DevicePath \"\""
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.991584 4755 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c15ecede-c840-4fc8-bc38-a970796c9517-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.991595 4755 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c15ecede-c840-4fc8-bc38-a970796c9517-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 10:13:31 crc kubenswrapper[4755]: I0224 10:13:31.991608 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjdpw\" (UniqueName: \"kubernetes.io/projected/c15ecede-c840-4fc8-bc38-a970796c9517-kube-api-access-kjdpw\") on node \"crc\" DevicePath \"\""
Feb 24 10:13:32 crc kubenswrapper[4755]: I0224 10:13:32.459403 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fqscc_c15ecede-c840-4fc8-bc38-a970796c9517/console/0.log"
Feb 24 10:13:32 crc kubenswrapper[4755]: I0224 10:13:32.459496 4755 generic.go:334] "Generic (PLEG): container finished" podID="c15ecede-c840-4fc8-bc38-a970796c9517" containerID="96a4b69f601bc685b906f4cacf5e70c36c7d9bac9b10756a48e0e4deb9d59523" exitCode=2
Feb 24 10:13:32 crc kubenswrapper[4755]: I0224 10:13:32.459549 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fqscc" event={"ID":"c15ecede-c840-4fc8-bc38-a970796c9517","Type":"ContainerDied","Data":"96a4b69f601bc685b906f4cacf5e70c36c7d9bac9b10756a48e0e4deb9d59523"}
Feb 24 10:13:32 crc kubenswrapper[4755]: I0224 10:13:32.459600 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fqscc" event={"ID":"c15ecede-c840-4fc8-bc38-a970796c9517","Type":"ContainerDied","Data":"4768d6648673b14ca2b36b7cfa861af87a5db60d2b61ff1e55ec7e5f05e8cfaf"}
Feb 24 10:13:32 crc kubenswrapper[4755]: I0224 10:13:32.459611 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fqscc"
Feb 24 10:13:32 crc kubenswrapper[4755]: I0224 10:13:32.459637 4755 scope.go:117] "RemoveContainer" containerID="96a4b69f601bc685b906f4cacf5e70c36c7d9bac9b10756a48e0e4deb9d59523"
Feb 24 10:13:32 crc kubenswrapper[4755]: I0224 10:13:32.488910 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fqscc"]
Feb 24 10:13:32 crc kubenswrapper[4755]: I0224 10:13:32.496332 4755 scope.go:117] "RemoveContainer" containerID="96a4b69f601bc685b906f4cacf5e70c36c7d9bac9b10756a48e0e4deb9d59523"
Feb 24 10:13:32 crc kubenswrapper[4755]: E0224 10:13:32.496971 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96a4b69f601bc685b906f4cacf5e70c36c7d9bac9b10756a48e0e4deb9d59523\": container with ID starting with 96a4b69f601bc685b906f4cacf5e70c36c7d9bac9b10756a48e0e4deb9d59523 not found: ID does not exist" containerID="96a4b69f601bc685b906f4cacf5e70c36c7d9bac9b10756a48e0e4deb9d59523"
Feb 24 10:13:32 crc kubenswrapper[4755]: I0224 10:13:32.497027 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96a4b69f601bc685b906f4cacf5e70c36c7d9bac9b10756a48e0e4deb9d59523"} err="failed to get container status \"96a4b69f601bc685b906f4cacf5e70c36c7d9bac9b10756a48e0e4deb9d59523\": rpc error: code = NotFound desc = could not find container \"96a4b69f601bc685b906f4cacf5e70c36c7d9bac9b10756a48e0e4deb9d59523\": container with ID starting with 96a4b69f601bc685b906f4cacf5e70c36c7d9bac9b10756a48e0e4deb9d59523 not found: ID does not exist"
Feb 24 10:13:32 crc kubenswrapper[4755]: I0224 10:13:32.497727 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-fqscc"]
Feb 24 10:13:33 crc kubenswrapper[4755]: I0224 10:13:33.473207 4755 generic.go:334] "Generic (PLEG): container finished" podID="e3b0411d-e378-47ba-b192-4b69dc317737" containerID="f07711bcbc806b3adf4f2ebe8e98ce167820a52fcb13267aeea8ad1ae69ca70b" exitCode=0
Feb 24 10:13:33 crc kubenswrapper[4755]: I0224 10:13:33.473267 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2" event={"ID":"e3b0411d-e378-47ba-b192-4b69dc317737","Type":"ContainerDied","Data":"f07711bcbc806b3adf4f2ebe8e98ce167820a52fcb13267aeea8ad1ae69ca70b"}
Feb 24 10:13:34 crc kubenswrapper[4755]: I0224 10:13:34.326572 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c15ecede-c840-4fc8-bc38-a970796c9517" path="/var/lib/kubelet/pods/c15ecede-c840-4fc8-bc38-a970796c9517/volumes"
Feb 24 10:13:35 crc kubenswrapper[4755]: I0224 10:13:35.125943 4755 generic.go:334] "Generic (PLEG): container finished" podID="e3b0411d-e378-47ba-b192-4b69dc317737" containerID="4ccd8147718bf8cf2cc6bc14ff5fb1bd73445f1261774fcc2f7abd1539e59a77" exitCode=0
Feb 24 10:13:35 crc kubenswrapper[4755]: I0224 10:13:35.126017 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2" event={"ID":"e3b0411d-e378-47ba-b192-4b69dc317737","Type":"ContainerDied","Data":"4ccd8147718bf8cf2cc6bc14ff5fb1bd73445f1261774fcc2f7abd1539e59a77"}
Feb 24 10:13:36 crc kubenswrapper[4755]: I0224 10:13:36.471328 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2"
Feb 24 10:13:36 crc kubenswrapper[4755]: I0224 10:13:36.531492 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7j4j\" (UniqueName: \"kubernetes.io/projected/e3b0411d-e378-47ba-b192-4b69dc317737-kube-api-access-l7j4j\") pod \"e3b0411d-e378-47ba-b192-4b69dc317737\" (UID: \"e3b0411d-e378-47ba-b192-4b69dc317737\") "
Feb 24 10:13:36 crc kubenswrapper[4755]: I0224 10:13:36.531604 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3b0411d-e378-47ba-b192-4b69dc317737-util\") pod \"e3b0411d-e378-47ba-b192-4b69dc317737\" (UID: \"e3b0411d-e378-47ba-b192-4b69dc317737\") "
Feb 24 10:13:36 crc kubenswrapper[4755]: I0224 10:13:36.531710 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3b0411d-e378-47ba-b192-4b69dc317737-bundle\") pod \"e3b0411d-e378-47ba-b192-4b69dc317737\" (UID: \"e3b0411d-e378-47ba-b192-4b69dc317737\") "
Feb 24 10:13:36 crc kubenswrapper[4755]: I0224 10:13:36.532981 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3b0411d-e378-47ba-b192-4b69dc317737-bundle" (OuterVolumeSpecName: "bundle") pod "e3b0411d-e378-47ba-b192-4b69dc317737" (UID: "e3b0411d-e378-47ba-b192-4b69dc317737"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 10:13:36 crc kubenswrapper[4755]: I0224 10:13:36.541810 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b0411d-e378-47ba-b192-4b69dc317737-kube-api-access-l7j4j" (OuterVolumeSpecName: "kube-api-access-l7j4j") pod "e3b0411d-e378-47ba-b192-4b69dc317737" (UID: "e3b0411d-e378-47ba-b192-4b69dc317737"). InnerVolumeSpecName "kube-api-access-l7j4j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 10:13:36 crc kubenswrapper[4755]: I0224 10:13:36.564824 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3b0411d-e378-47ba-b192-4b69dc317737-util" (OuterVolumeSpecName: "util") pod "e3b0411d-e378-47ba-b192-4b69dc317737" (UID: "e3b0411d-e378-47ba-b192-4b69dc317737"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 10:13:36 crc kubenswrapper[4755]: I0224 10:13:36.632770 4755 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3b0411d-e378-47ba-b192-4b69dc317737-util\") on node \"crc\" DevicePath \"\""
Feb 24 10:13:36 crc kubenswrapper[4755]: I0224 10:13:36.632827 4755 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3b0411d-e378-47ba-b192-4b69dc317737-bundle\") on node \"crc\" DevicePath \"\""
Feb 24 10:13:36 crc kubenswrapper[4755]: I0224 10:13:36.632847 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7j4j\" (UniqueName: \"kubernetes.io/projected/e3b0411d-e378-47ba-b192-4b69dc317737-kube-api-access-l7j4j\") on node \"crc\" DevicePath \"\""
Feb 24 10:13:37 crc kubenswrapper[4755]: I0224 10:13:37.143802 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2" event={"ID":"e3b0411d-e378-47ba-b192-4b69dc317737","Type":"ContainerDied","Data":"f9e1358051f0f1599fbd923f6659161b9cc64ad24d00d90c597084c18b929fa4"}
Feb 24 10:13:37 crc kubenswrapper[4755]: I0224 10:13:37.144175 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9e1358051f0f1599fbd923f6659161b9cc64ad24d00d90c597084c18b929fa4"
Feb 24 10:13:37 crc kubenswrapper[4755]: I0224 10:13:37.143867 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137hxc2"
Feb 24 10:13:43 crc kubenswrapper[4755]: I0224 10:13:43.882934 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36610: no serving certificate available for the kubelet"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.200355 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5dc8887c9c-424zz"]
Feb 24 10:13:45 crc kubenswrapper[4755]: E0224 10:13:45.200543 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15ecede-c840-4fc8-bc38-a970796c9517" containerName="console"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.200555 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15ecede-c840-4fc8-bc38-a970796c9517" containerName="console"
Feb 24 10:13:45 crc kubenswrapper[4755]: E0224 10:13:45.200565 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b0411d-e378-47ba-b192-4b69dc317737" containerName="util"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.200570 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b0411d-e378-47ba-b192-4b69dc317737" containerName="util"
Feb 24 10:13:45 crc kubenswrapper[4755]: E0224 10:13:45.200580 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b0411d-e378-47ba-b192-4b69dc317737" containerName="pull"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.200586 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b0411d-e378-47ba-b192-4b69dc317737" containerName="pull"
Feb 24 10:13:45 crc kubenswrapper[4755]: E0224 10:13:45.200606 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b0411d-e378-47ba-b192-4b69dc317737" containerName="extract"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.200612 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b0411d-e378-47ba-b192-4b69dc317737" containerName="extract"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.200709 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="c15ecede-c840-4fc8-bc38-a970796c9517" containerName="console"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.200721 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b0411d-e378-47ba-b192-4b69dc317737" containerName="extract"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.201115 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5dc8887c9c-424zz"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.206466 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.207034 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.207041 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-c4nxd"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.208008 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.212915 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.247749 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2ef974ab-d5e8-4319-bf13-24e435f5d3b6-webhook-cert\") pod \"metallb-operator-controller-manager-5dc8887c9c-424zz\" (UID: \"2ef974ab-d5e8-4319-bf13-24e435f5d3b6\") " pod="metallb-system/metallb-operator-controller-manager-5dc8887c9c-424zz"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.247819 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62bp7\" (UniqueName: \"kubernetes.io/projected/2ef974ab-d5e8-4319-bf13-24e435f5d3b6-kube-api-access-62bp7\") pod \"metallb-operator-controller-manager-5dc8887c9c-424zz\" (UID: \"2ef974ab-d5e8-4319-bf13-24e435f5d3b6\") " pod="metallb-system/metallb-operator-controller-manager-5dc8887c9c-424zz"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.247838 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2ef974ab-d5e8-4319-bf13-24e435f5d3b6-apiservice-cert\") pod \"metallb-operator-controller-manager-5dc8887c9c-424zz\" (UID: \"2ef974ab-d5e8-4319-bf13-24e435f5d3b6\") " pod="metallb-system/metallb-operator-controller-manager-5dc8887c9c-424zz"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.254481 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5dc8887c9c-424zz"]
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.348592 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2ef974ab-d5e8-4319-bf13-24e435f5d3b6-webhook-cert\") pod \"metallb-operator-controller-manager-5dc8887c9c-424zz\" (UID: \"2ef974ab-d5e8-4319-bf13-24e435f5d3b6\") " pod="metallb-system/metallb-operator-controller-manager-5dc8887c9c-424zz"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.348657 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62bp7\" (UniqueName: \"kubernetes.io/projected/2ef974ab-d5e8-4319-bf13-24e435f5d3b6-kube-api-access-62bp7\") pod \"metallb-operator-controller-manager-5dc8887c9c-424zz\" (UID: \"2ef974ab-d5e8-4319-bf13-24e435f5d3b6\") " pod="metallb-system/metallb-operator-controller-manager-5dc8887c9c-424zz"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.348678 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2ef974ab-d5e8-4319-bf13-24e435f5d3b6-apiservice-cert\") pod \"metallb-operator-controller-manager-5dc8887c9c-424zz\" (UID: \"2ef974ab-d5e8-4319-bf13-24e435f5d3b6\") " pod="metallb-system/metallb-operator-controller-manager-5dc8887c9c-424zz"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.359336 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2ef974ab-d5e8-4319-bf13-24e435f5d3b6-apiservice-cert\") pod \"metallb-operator-controller-manager-5dc8887c9c-424zz\" (UID: \"2ef974ab-d5e8-4319-bf13-24e435f5d3b6\") " pod="metallb-system/metallb-operator-controller-manager-5dc8887c9c-424zz"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.362691 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2ef974ab-d5e8-4319-bf13-24e435f5d3b6-webhook-cert\") pod \"metallb-operator-controller-manager-5dc8887c9c-424zz\" (UID: \"2ef974ab-d5e8-4319-bf13-24e435f5d3b6\") " pod="metallb-system/metallb-operator-controller-manager-5dc8887c9c-424zz"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.382694 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62bp7\" (UniqueName: \"kubernetes.io/projected/2ef974ab-d5e8-4319-bf13-24e435f5d3b6-kube-api-access-62bp7\") pod \"metallb-operator-controller-manager-5dc8887c9c-424zz\" (UID: \"2ef974ab-d5e8-4319-bf13-24e435f5d3b6\") " pod="metallb-system/metallb-operator-controller-manager-5dc8887c9c-424zz"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.447359 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6fb685767-7tqkh"]
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.448189 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6fb685767-7tqkh"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.449950 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-dchw8"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.452344 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.452916 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.464488 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6fb685767-7tqkh"]
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.514243 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5dc8887c9c-424zz"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.551118 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae9d9235-5c8b-433d-9432-25e0f570b9f9-apiservice-cert\") pod \"metallb-operator-webhook-server-6fb685767-7tqkh\" (UID: \"ae9d9235-5c8b-433d-9432-25e0f570b9f9\") " pod="metallb-system/metallb-operator-webhook-server-6fb685767-7tqkh"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.551151 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae9d9235-5c8b-433d-9432-25e0f570b9f9-webhook-cert\") pod \"metallb-operator-webhook-server-6fb685767-7tqkh\" (UID: \"ae9d9235-5c8b-433d-9432-25e0f570b9f9\") " pod="metallb-system/metallb-operator-webhook-server-6fb685767-7tqkh"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.551207 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkhgz\" (UniqueName: \"kubernetes.io/projected/ae9d9235-5c8b-433d-9432-25e0f570b9f9-kube-api-access-pkhgz\") pod \"metallb-operator-webhook-server-6fb685767-7tqkh\" (UID: \"ae9d9235-5c8b-433d-9432-25e0f570b9f9\") " pod="metallb-system/metallb-operator-webhook-server-6fb685767-7tqkh"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.651857 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhgz\" (UniqueName: \"kubernetes.io/projected/ae9d9235-5c8b-433d-9432-25e0f570b9f9-kube-api-access-pkhgz\") pod \"metallb-operator-webhook-server-6fb685767-7tqkh\" (UID: \"ae9d9235-5c8b-433d-9432-25e0f570b9f9\") " pod="metallb-system/metallb-operator-webhook-server-6fb685767-7tqkh"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.652274 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae9d9235-5c8b-433d-9432-25e0f570b9f9-apiservice-cert\") pod \"metallb-operator-webhook-server-6fb685767-7tqkh\" (UID: \"ae9d9235-5c8b-433d-9432-25e0f570b9f9\") " pod="metallb-system/metallb-operator-webhook-server-6fb685767-7tqkh"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.652302 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae9d9235-5c8b-433d-9432-25e0f570b9f9-webhook-cert\") pod \"metallb-operator-webhook-server-6fb685767-7tqkh\" (UID: \"ae9d9235-5c8b-433d-9432-25e0f570b9f9\") " pod="metallb-system/metallb-operator-webhook-server-6fb685767-7tqkh"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.657942 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae9d9235-5c8b-433d-9432-25e0f570b9f9-webhook-cert\") pod \"metallb-operator-webhook-server-6fb685767-7tqkh\" (UID: \"ae9d9235-5c8b-433d-9432-25e0f570b9f9\") " pod="metallb-system/metallb-operator-webhook-server-6fb685767-7tqkh"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.657942 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae9d9235-5c8b-433d-9432-25e0f570b9f9-apiservice-cert\") pod \"metallb-operator-webhook-server-6fb685767-7tqkh\" (UID: \"ae9d9235-5c8b-433d-9432-25e0f570b9f9\") " pod="metallb-system/metallb-operator-webhook-server-6fb685767-7tqkh"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.672829 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkhgz\" (UniqueName: \"kubernetes.io/projected/ae9d9235-5c8b-433d-9432-25e0f570b9f9-kube-api-access-pkhgz\") pod \"metallb-operator-webhook-server-6fb685767-7tqkh\" (UID: \"ae9d9235-5c8b-433d-9432-25e0f570b9f9\") " pod="metallb-system/metallb-operator-webhook-server-6fb685767-7tqkh"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.723420 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5dc8887c9c-424zz"]
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.761497 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6fb685767-7tqkh"
Feb 24 10:13:45 crc kubenswrapper[4755]: I0224 10:13:45.974240 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6fb685767-7tqkh"]
Feb 24 10:13:45 crc kubenswrapper[4755]: W0224 10:13:45.979458 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae9d9235_5c8b_433d_9432_25e0f570b9f9.slice/crio-9cd5be31bfc4c18d3a3ce181a706526d2b3dea53b0089c54e8045209c91a19ad WatchSource:0}: Error finding container 9cd5be31bfc4c18d3a3ce181a706526d2b3dea53b0089c54e8045209c91a19ad: Status 404 returned error can't find the container with id 9cd5be31bfc4c18d3a3ce181a706526d2b3dea53b0089c54e8045209c91a19ad
Feb 24 10:13:46 crc kubenswrapper[4755]: I0224 10:13:46.226680 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6fb685767-7tqkh" event={"ID":"ae9d9235-5c8b-433d-9432-25e0f570b9f9","Type":"ContainerStarted","Data":"9cd5be31bfc4c18d3a3ce181a706526d2b3dea53b0089c54e8045209c91a19ad"}
Feb 24 10:13:46 crc kubenswrapper[4755]: I0224 10:13:46.229564 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5dc8887c9c-424zz" event={"ID":"2ef974ab-d5e8-4319-bf13-24e435f5d3b6","Type":"ContainerStarted","Data":"63d088b20fe30d96babdaa352bc826f10b84e11b425769776bd1058939c1b29d"}
Feb 24 10:13:51 crc kubenswrapper[4755]: I0224 10:13:51.261881 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6fb685767-7tqkh" event={"ID":"ae9d9235-5c8b-433d-9432-25e0f570b9f9","Type":"ContainerStarted","Data":"707e3b221766d5a9d3af9035facef68ce49a04888f8a9c016ce974675ea40f54"}
Feb 24 10:13:51 crc kubenswrapper[4755]: I0224 10:13:51.262752 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status=""
pod="metallb-system/metallb-operator-webhook-server-6fb685767-7tqkh" Feb 24 10:13:51 crc kubenswrapper[4755]: I0224 10:13:51.286043 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6fb685767-7tqkh" podStartSLOduration=1.645800211 podStartE2EDuration="6.286021328s" podCreationTimestamp="2026-02-24 10:13:45 +0000 UTC" firstStartedPulling="2026-02-24 10:13:45.980964654 +0000 UTC m=+1130.437487207" lastFinishedPulling="2026-02-24 10:13:50.621185781 +0000 UTC m=+1135.077708324" observedRunningTime="2026-02-24 10:13:51.281918998 +0000 UTC m=+1135.738441581" watchObservedRunningTime="2026-02-24 10:13:51.286021328 +0000 UTC m=+1135.742543871" Feb 24 10:13:56 crc kubenswrapper[4755]: I0224 10:13:56.296127 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5dc8887c9c-424zz" event={"ID":"2ef974ab-d5e8-4319-bf13-24e435f5d3b6","Type":"ContainerStarted","Data":"e0a9df912f0e0d1f26bff42ed441a7e25187406ad0271ee226157bd00fd4e428"} Feb 24 10:13:56 crc kubenswrapper[4755]: I0224 10:13:56.296591 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5dc8887c9c-424zz" Feb 24 10:13:56 crc kubenswrapper[4755]: I0224 10:13:56.330985 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5dc8887c9c-424zz" podStartSLOduration=1.601833342 podStartE2EDuration="11.330967207s" podCreationTimestamp="2026-02-24 10:13:45 +0000 UTC" firstStartedPulling="2026-02-24 10:13:45.730961168 +0000 UTC m=+1130.187483711" lastFinishedPulling="2026-02-24 10:13:55.460095023 +0000 UTC m=+1139.916617576" observedRunningTime="2026-02-24 10:13:56.326389541 +0000 UTC m=+1140.782912094" watchObservedRunningTime="2026-02-24 10:13:56.330967207 +0000 UTC m=+1140.787489750" Feb 24 10:14:05 crc kubenswrapper[4755]: I0224 10:14:05.765790 4755 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6fb685767-7tqkh" Feb 24 10:14:24 crc kubenswrapper[4755]: I0224 10:14:24.880438 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50962: no serving certificate available for the kubelet" Feb 24 10:14:25 crc kubenswrapper[4755]: I0224 10:14:25.518571 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5dc8887c9c-424zz" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.365336 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-dsqgk"] Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.367558 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.370540 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.370982 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.371248 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-gfhnz" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.377112 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-bzq67"] Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.378145 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bzq67" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.379804 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.385829 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-bzq67"] Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.420091 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/911bd7c3-481b-4759-9517-d4877fcccca0-reloader\") pod \"frr-k8s-dsqgk\" (UID: \"911bd7c3-481b-4759-9517-d4877fcccca0\") " pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.420133 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/911bd7c3-481b-4759-9517-d4877fcccca0-frr-conf\") pod \"frr-k8s-dsqgk\" (UID: \"911bd7c3-481b-4759-9517-d4877fcccca0\") " pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.420165 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/911bd7c3-481b-4759-9517-d4877fcccca0-frr-startup\") pod \"frr-k8s-dsqgk\" (UID: \"911bd7c3-481b-4759-9517-d4877fcccca0\") " pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.420190 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/911bd7c3-481b-4759-9517-d4877fcccca0-metrics-certs\") pod \"frr-k8s-dsqgk\" (UID: \"911bd7c3-481b-4759-9517-d4877fcccca0\") " pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 
10:14:26.420220 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97a260b8-009a-4e77-90e0-41c607dd7eea-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-bzq67\" (UID: \"97a260b8-009a-4e77-90e0-41c607dd7eea\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bzq67" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.420260 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs762\" (UniqueName: \"kubernetes.io/projected/911bd7c3-481b-4759-9517-d4877fcccca0-kube-api-access-cs762\") pod \"frr-k8s-dsqgk\" (UID: \"911bd7c3-481b-4759-9517-d4877fcccca0\") " pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.420326 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/911bd7c3-481b-4759-9517-d4877fcccca0-metrics\") pod \"frr-k8s-dsqgk\" (UID: \"911bd7c3-481b-4759-9517-d4877fcccca0\") " pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.420349 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/911bd7c3-481b-4759-9517-d4877fcccca0-frr-sockets\") pod \"frr-k8s-dsqgk\" (UID: \"911bd7c3-481b-4759-9517-d4877fcccca0\") " pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.420379 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxqlk\" (UniqueName: \"kubernetes.io/projected/97a260b8-009a-4e77-90e0-41c607dd7eea-kube-api-access-lxqlk\") pod \"frr-k8s-webhook-server-78b44bf5bb-bzq67\" (UID: \"97a260b8-009a-4e77-90e0-41c607dd7eea\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bzq67" Feb 24 10:14:26 crc kubenswrapper[4755]: 
I0224 10:14:26.452247 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-fr9z2"] Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.455002 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-fr9z2" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.456869 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.457294 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-ll65s" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.457494 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.457551 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.458798 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-jxpkq"] Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.464405 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-jxpkq" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.468680 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.502888 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-jxpkq"] Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.522267 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/afc404cd-9f2f-4a81-99a1-2889d32452db-cert\") pod \"controller-69bbfbf88f-jxpkq\" (UID: \"afc404cd-9f2f-4a81-99a1-2889d32452db\") " pod="metallb-system/controller-69bbfbf88f-jxpkq" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.522304 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/911bd7c3-481b-4759-9517-d4877fcccca0-reloader\") pod \"frr-k8s-dsqgk\" (UID: \"911bd7c3-481b-4759-9517-d4877fcccca0\") " pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.522322 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/911bd7c3-481b-4759-9517-d4877fcccca0-frr-conf\") pod \"frr-k8s-dsqgk\" (UID: \"911bd7c3-481b-4759-9517-d4877fcccca0\") " pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.522341 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/911bd7c3-481b-4759-9517-d4877fcccca0-frr-startup\") pod \"frr-k8s-dsqgk\" (UID: \"911bd7c3-481b-4759-9517-d4877fcccca0\") " pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.522356 4755 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/911bd7c3-481b-4759-9517-d4877fcccca0-metrics-certs\") pod \"frr-k8s-dsqgk\" (UID: \"911bd7c3-481b-4759-9517-d4877fcccca0\") " pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.522373 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kht56\" (UniqueName: \"kubernetes.io/projected/afc404cd-9f2f-4a81-99a1-2889d32452db-kube-api-access-kht56\") pod \"controller-69bbfbf88f-jxpkq\" (UID: \"afc404cd-9f2f-4a81-99a1-2889d32452db\") " pod="metallb-system/controller-69bbfbf88f-jxpkq" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.522394 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97a260b8-009a-4e77-90e0-41c607dd7eea-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-bzq67\" (UID: \"97a260b8-009a-4e77-90e0-41c607dd7eea\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bzq67" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.522409 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs762\" (UniqueName: \"kubernetes.io/projected/911bd7c3-481b-4759-9517-d4877fcccca0-kube-api-access-cs762\") pod \"frr-k8s-dsqgk\" (UID: \"911bd7c3-481b-4759-9517-d4877fcccca0\") " pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.522427 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e-metrics-certs\") pod \"speaker-fr9z2\" (UID: \"c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e\") " pod="metallb-system/speaker-fr9z2" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.522451 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e-metallb-excludel2\") pod \"speaker-fr9z2\" (UID: \"c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e\") " pod="metallb-system/speaker-fr9z2" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.522471 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/911bd7c3-481b-4759-9517-d4877fcccca0-metrics\") pod \"frr-k8s-dsqgk\" (UID: \"911bd7c3-481b-4759-9517-d4877fcccca0\") " pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.522487 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/911bd7c3-481b-4759-9517-d4877fcccca0-frr-sockets\") pod \"frr-k8s-dsqgk\" (UID: \"911bd7c3-481b-4759-9517-d4877fcccca0\") " pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.522503 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e-memberlist\") pod \"speaker-fr9z2\" (UID: \"c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e\") " pod="metallb-system/speaker-fr9z2" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.522525 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxqlk\" (UniqueName: \"kubernetes.io/projected/97a260b8-009a-4e77-90e0-41c607dd7eea-kube-api-access-lxqlk\") pod \"frr-k8s-webhook-server-78b44bf5bb-bzq67\" (UID: \"97a260b8-009a-4e77-90e0-41c607dd7eea\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bzq67" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.522546 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvs8j\" (UniqueName: \"kubernetes.io/projected/c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e-kube-api-access-gvs8j\") pod 
\"speaker-fr9z2\" (UID: \"c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e\") " pod="metallb-system/speaker-fr9z2" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.522564 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/afc404cd-9f2f-4a81-99a1-2889d32452db-metrics-certs\") pod \"controller-69bbfbf88f-jxpkq\" (UID: \"afc404cd-9f2f-4a81-99a1-2889d32452db\") " pod="metallb-system/controller-69bbfbf88f-jxpkq" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.523106 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/911bd7c3-481b-4759-9517-d4877fcccca0-reloader\") pod \"frr-k8s-dsqgk\" (UID: \"911bd7c3-481b-4759-9517-d4877fcccca0\") " pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.523341 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/911bd7c3-481b-4759-9517-d4877fcccca0-frr-conf\") pod \"frr-k8s-dsqgk\" (UID: \"911bd7c3-481b-4759-9517-d4877fcccca0\") " pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.523984 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/911bd7c3-481b-4759-9517-d4877fcccca0-frr-startup\") pod \"frr-k8s-dsqgk\" (UID: \"911bd7c3-481b-4759-9517-d4877fcccca0\") " pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:26 crc kubenswrapper[4755]: E0224 10:14:26.524573 4755 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 24 10:14:26 crc kubenswrapper[4755]: E0224 10:14:26.524621 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97a260b8-009a-4e77-90e0-41c607dd7eea-cert podName:97a260b8-009a-4e77-90e0-41c607dd7eea nodeName:}" failed. 
No retries permitted until 2026-02-24 10:14:27.024606809 +0000 UTC m=+1171.481129352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97a260b8-009a-4e77-90e0-41c607dd7eea-cert") pod "frr-k8s-webhook-server-78b44bf5bb-bzq67" (UID: "97a260b8-009a-4e77-90e0-41c607dd7eea") : secret "frr-k8s-webhook-server-cert" not found Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.524904 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/911bd7c3-481b-4759-9517-d4877fcccca0-metrics\") pod \"frr-k8s-dsqgk\" (UID: \"911bd7c3-481b-4759-9517-d4877fcccca0\") " pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.525132 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/911bd7c3-481b-4759-9517-d4877fcccca0-frr-sockets\") pod \"frr-k8s-dsqgk\" (UID: \"911bd7c3-481b-4759-9517-d4877fcccca0\") " pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.544686 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/911bd7c3-481b-4759-9517-d4877fcccca0-metrics-certs\") pod \"frr-k8s-dsqgk\" (UID: \"911bd7c3-481b-4759-9517-d4877fcccca0\") " pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.547793 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs762\" (UniqueName: \"kubernetes.io/projected/911bd7c3-481b-4759-9517-d4877fcccca0-kube-api-access-cs762\") pod \"frr-k8s-dsqgk\" (UID: \"911bd7c3-481b-4759-9517-d4877fcccca0\") " pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.547990 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxqlk\" (UniqueName: 
\"kubernetes.io/projected/97a260b8-009a-4e77-90e0-41c607dd7eea-kube-api-access-lxqlk\") pod \"frr-k8s-webhook-server-78b44bf5bb-bzq67\" (UID: \"97a260b8-009a-4e77-90e0-41c607dd7eea\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bzq67" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.623920 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/afc404cd-9f2f-4a81-99a1-2889d32452db-metrics-certs\") pod \"controller-69bbfbf88f-jxpkq\" (UID: \"afc404cd-9f2f-4a81-99a1-2889d32452db\") " pod="metallb-system/controller-69bbfbf88f-jxpkq" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.623987 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/afc404cd-9f2f-4a81-99a1-2889d32452db-cert\") pod \"controller-69bbfbf88f-jxpkq\" (UID: \"afc404cd-9f2f-4a81-99a1-2889d32452db\") " pod="metallb-system/controller-69bbfbf88f-jxpkq" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.624034 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kht56\" (UniqueName: \"kubernetes.io/projected/afc404cd-9f2f-4a81-99a1-2889d32452db-kube-api-access-kht56\") pod \"controller-69bbfbf88f-jxpkq\" (UID: \"afc404cd-9f2f-4a81-99a1-2889d32452db\") " pod="metallb-system/controller-69bbfbf88f-jxpkq" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.624104 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e-metrics-certs\") pod \"speaker-fr9z2\" (UID: \"c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e\") " pod="metallb-system/speaker-fr9z2" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.624141 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e-metallb-excludel2\") pod \"speaker-fr9z2\" (UID: \"c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e\") " pod="metallb-system/speaker-fr9z2" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.624173 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e-memberlist\") pod \"speaker-fr9z2\" (UID: \"c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e\") " pod="metallb-system/speaker-fr9z2" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.624212 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvs8j\" (UniqueName: \"kubernetes.io/projected/c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e-kube-api-access-gvs8j\") pod \"speaker-fr9z2\" (UID: \"c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e\") " pod="metallb-system/speaker-fr9z2" Feb 24 10:14:26 crc kubenswrapper[4755]: E0224 10:14:26.624595 4755 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 24 10:14:26 crc kubenswrapper[4755]: E0224 10:14:26.626223 4755 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.626285 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e-metallb-excludel2\") pod \"speaker-fr9z2\" (UID: \"c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e\") " pod="metallb-system/speaker-fr9z2" Feb 24 10:14:26 crc kubenswrapper[4755]: E0224 10:14:26.626305 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e-memberlist podName:c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e nodeName:}" failed. 
No retries permitted until 2026-02-24 10:14:27.12628514 +0000 UTC m=+1171.582807683 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e-memberlist") pod "speaker-fr9z2" (UID: "c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e") : secret "metallb-memberlist" not found Feb 24 10:14:26 crc kubenswrapper[4755]: E0224 10:14:26.627498 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afc404cd-9f2f-4a81-99a1-2889d32452db-metrics-certs podName:afc404cd-9f2f-4a81-99a1-2889d32452db nodeName:}" failed. No retries permitted until 2026-02-24 10:14:27.127475418 +0000 UTC m=+1171.583997971 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/afc404cd-9f2f-4a81-99a1-2889d32452db-metrics-certs") pod "controller-69bbfbf88f-jxpkq" (UID: "afc404cd-9f2f-4a81-99a1-2889d32452db") : secret "controller-certs-secret" not found Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.629198 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.630728 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e-metrics-certs\") pod \"speaker-fr9z2\" (UID: \"c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e\") " pod="metallb-system/speaker-fr9z2" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.639281 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/afc404cd-9f2f-4a81-99a1-2889d32452db-cert\") pod \"controller-69bbfbf88f-jxpkq\" (UID: \"afc404cd-9f2f-4a81-99a1-2889d32452db\") " pod="metallb-system/controller-69bbfbf88f-jxpkq" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.646402 4755 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gvs8j\" (UniqueName: \"kubernetes.io/projected/c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e-kube-api-access-gvs8j\") pod \"speaker-fr9z2\" (UID: \"c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e\") " pod="metallb-system/speaker-fr9z2" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.648396 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kht56\" (UniqueName: \"kubernetes.io/projected/afc404cd-9f2f-4a81-99a1-2889d32452db-kube-api-access-kht56\") pod \"controller-69bbfbf88f-jxpkq\" (UID: \"afc404cd-9f2f-4a81-99a1-2889d32452db\") " pod="metallb-system/controller-69bbfbf88f-jxpkq" Feb 24 10:14:26 crc kubenswrapper[4755]: I0224 10:14:26.693985 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:27 crc kubenswrapper[4755]: I0224 10:14:27.029947 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97a260b8-009a-4e77-90e0-41c607dd7eea-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-bzq67\" (UID: \"97a260b8-009a-4e77-90e0-41c607dd7eea\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bzq67" Feb 24 10:14:27 crc kubenswrapper[4755]: I0224 10:14:27.036805 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97a260b8-009a-4e77-90e0-41c607dd7eea-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-bzq67\" (UID: \"97a260b8-009a-4e77-90e0-41c607dd7eea\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bzq67" Feb 24 10:14:27 crc kubenswrapper[4755]: I0224 10:14:27.130793 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e-memberlist\") pod \"speaker-fr9z2\" (UID: \"c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e\") " pod="metallb-system/speaker-fr9z2" Feb 24 10:14:27 crc 
kubenswrapper[4755]: I0224 10:14:27.130906 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/afc404cd-9f2f-4a81-99a1-2889d32452db-metrics-certs\") pod \"controller-69bbfbf88f-jxpkq\" (UID: \"afc404cd-9f2f-4a81-99a1-2889d32452db\") " pod="metallb-system/controller-69bbfbf88f-jxpkq" Feb 24 10:14:27 crc kubenswrapper[4755]: E0224 10:14:27.131863 4755 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 24 10:14:27 crc kubenswrapper[4755]: E0224 10:14:27.131958 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e-memberlist podName:c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e nodeName:}" failed. No retries permitted until 2026-02-24 10:14:28.13193161 +0000 UTC m=+1172.588454183 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e-memberlist") pod "speaker-fr9z2" (UID: "c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e") : secret "metallb-memberlist" not found Feb 24 10:14:27 crc kubenswrapper[4755]: I0224 10:14:27.137822 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/afc404cd-9f2f-4a81-99a1-2889d32452db-metrics-certs\") pod \"controller-69bbfbf88f-jxpkq\" (UID: \"afc404cd-9f2f-4a81-99a1-2889d32452db\") " pod="metallb-system/controller-69bbfbf88f-jxpkq" Feb 24 10:14:27 crc kubenswrapper[4755]: I0224 10:14:27.307922 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bzq67" Feb 24 10:14:27 crc kubenswrapper[4755]: I0224 10:14:27.392934 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-jxpkq" Feb 24 10:14:27 crc kubenswrapper[4755]: I0224 10:14:27.515876 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dsqgk" event={"ID":"911bd7c3-481b-4759-9517-d4877fcccca0","Type":"ContainerStarted","Data":"e4a013e6a443a8916e4392d540320753723bab93f44e5482d1948d77b185cf6e"} Feb 24 10:14:27 crc kubenswrapper[4755]: I0224 10:14:27.691379 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-bzq67"] Feb 24 10:14:27 crc kubenswrapper[4755]: I0224 10:14:27.730432 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-jxpkq"] Feb 24 10:14:27 crc kubenswrapper[4755]: W0224 10:14:27.738006 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafc404cd_9f2f_4a81_99a1_2889d32452db.slice/crio-3c1f37a5f1cc00a7ebf5926ec47ce502396544294d49b41b3c5835dbd562fbbf WatchSource:0}: Error finding container 3c1f37a5f1cc00a7ebf5926ec47ce502396544294d49b41b3c5835dbd562fbbf: Status 404 returned error can't find the container with id 3c1f37a5f1cc00a7ebf5926ec47ce502396544294d49b41b3c5835dbd562fbbf Feb 24 10:14:28 crc kubenswrapper[4755]: I0224 10:14:28.145508 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e-memberlist\") pod \"speaker-fr9z2\" (UID: \"c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e\") " pod="metallb-system/speaker-fr9z2" Feb 24 10:14:28 crc kubenswrapper[4755]: I0224 10:14:28.152439 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e-memberlist\") pod \"speaker-fr9z2\" (UID: \"c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e\") " pod="metallb-system/speaker-fr9z2" Feb 24 10:14:28 crc 
kubenswrapper[4755]: I0224 10:14:28.272790 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-fr9z2" Feb 24 10:14:28 crc kubenswrapper[4755]: W0224 10:14:28.299199 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7d1d455_aafa_4fe4_b2d6_ce51dcb19e8e.slice/crio-cd6d25e41e5a4d0342e89abbce252a0b704cbe6f06b8a2a39152d15b07c1bd0f WatchSource:0}: Error finding container cd6d25e41e5a4d0342e89abbce252a0b704cbe6f06b8a2a39152d15b07c1bd0f: Status 404 returned error can't find the container with id cd6d25e41e5a4d0342e89abbce252a0b704cbe6f06b8a2a39152d15b07c1bd0f Feb 24 10:14:28 crc kubenswrapper[4755]: I0224 10:14:28.526258 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bzq67" event={"ID":"97a260b8-009a-4e77-90e0-41c607dd7eea","Type":"ContainerStarted","Data":"632aef022f2b24fa44f34bf031379c7d7711e2d3df3f5b1430a777017052eadb"} Feb 24 10:14:28 crc kubenswrapper[4755]: I0224 10:14:28.528613 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-jxpkq" event={"ID":"afc404cd-9f2f-4a81-99a1-2889d32452db","Type":"ContainerStarted","Data":"622e3cb7857e032023c6777cfb82677bef79f764d435fc4717ded6f514d66314"} Feb 24 10:14:28 crc kubenswrapper[4755]: I0224 10:14:28.528661 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-jxpkq" event={"ID":"afc404cd-9f2f-4a81-99a1-2889d32452db","Type":"ContainerStarted","Data":"7d017e19a54169d50e4d0edbae67a65597cf86126e84afe80be5998267a5feee"} Feb 24 10:14:28 crc kubenswrapper[4755]: I0224 10:14:28.528689 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-jxpkq" event={"ID":"afc404cd-9f2f-4a81-99a1-2889d32452db","Type":"ContainerStarted","Data":"3c1f37a5f1cc00a7ebf5926ec47ce502396544294d49b41b3c5835dbd562fbbf"} Feb 24 10:14:28 
crc kubenswrapper[4755]: I0224 10:14:28.528845 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-jxpkq" Feb 24 10:14:28 crc kubenswrapper[4755]: I0224 10:14:28.529219 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fr9z2" event={"ID":"c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e","Type":"ContainerStarted","Data":"cd6d25e41e5a4d0342e89abbce252a0b704cbe6f06b8a2a39152d15b07c1bd0f"} Feb 24 10:14:28 crc kubenswrapper[4755]: I0224 10:14:28.552663 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-jxpkq" podStartSLOduration=2.5526435579999998 podStartE2EDuration="2.552643558s" podCreationTimestamp="2026-02-24 10:14:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 10:14:28.551390449 +0000 UTC m=+1173.007913002" watchObservedRunningTime="2026-02-24 10:14:28.552643558 +0000 UTC m=+1173.009166111" Feb 24 10:14:29 crc kubenswrapper[4755]: I0224 10:14:29.538553 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fr9z2" event={"ID":"c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e","Type":"ContainerStarted","Data":"c041cfbb1336f882f0a3f9a984a6692e8a8d9b4f05e4c2c59d2ab3c126b64d5a"} Feb 24 10:14:29 crc kubenswrapper[4755]: I0224 10:14:29.538593 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fr9z2" event={"ID":"c7d1d455-aafa-4fe4-b2d6-ce51dcb19e8e","Type":"ContainerStarted","Data":"ac1a205169875832132284231ea5487264864415f4d0cc2aabb12bf8bbc7e862"} Feb 24 10:14:29 crc kubenswrapper[4755]: I0224 10:14:29.577744 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-fr9z2" podStartSLOduration=3.577728783 podStartE2EDuration="3.577728783s" podCreationTimestamp="2026-02-24 10:14:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 10:14:29.575905145 +0000 UTC m=+1174.032427688" watchObservedRunningTime="2026-02-24 10:14:29.577728783 +0000 UTC m=+1174.034251326" Feb 24 10:14:30 crc kubenswrapper[4755]: I0224 10:14:30.544300 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-fr9z2" Feb 24 10:14:34 crc kubenswrapper[4755]: I0224 10:14:34.581996 4755 generic.go:334] "Generic (PLEG): container finished" podID="911bd7c3-481b-4759-9517-d4877fcccca0" containerID="e662f0f8c06f569843dc3a7342d5ba5c6588d9c9d2318c59c56c0b2d26ca1c49" exitCode=0 Feb 24 10:14:34 crc kubenswrapper[4755]: I0224 10:14:34.582103 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dsqgk" event={"ID":"911bd7c3-481b-4759-9517-d4877fcccca0","Type":"ContainerDied","Data":"e662f0f8c06f569843dc3a7342d5ba5c6588d9c9d2318c59c56c0b2d26ca1c49"} Feb 24 10:14:34 crc kubenswrapper[4755]: I0224 10:14:34.584655 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bzq67" event={"ID":"97a260b8-009a-4e77-90e0-41c607dd7eea","Type":"ContainerStarted","Data":"2009b3b486c7b54153e0e81e837fb580edea3311c3e0ed1226fd872a9d121757"} Feb 24 10:14:34 crc kubenswrapper[4755]: I0224 10:14:34.584833 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bzq67" Feb 24 10:14:34 crc kubenswrapper[4755]: I0224 10:14:34.658757 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bzq67" podStartSLOduration=2.783758436 podStartE2EDuration="8.65873649s" podCreationTimestamp="2026-02-24 10:14:26 +0000 UTC" firstStartedPulling="2026-02-24 10:14:27.701553579 +0000 UTC m=+1172.158076122" lastFinishedPulling="2026-02-24 10:14:33.576531613 +0000 UTC m=+1178.033054176" observedRunningTime="2026-02-24 10:14:34.650604174 
+0000 UTC m=+1179.107126727" watchObservedRunningTime="2026-02-24 10:14:34.65873649 +0000 UTC m=+1179.115259043" Feb 24 10:14:35 crc kubenswrapper[4755]: I0224 10:14:35.595791 4755 generic.go:334] "Generic (PLEG): container finished" podID="911bd7c3-481b-4759-9517-d4877fcccca0" containerID="6bd2cc9140fc44836fe745e447eaa56a3ede307cabb8ef17c0af10cfbee5bc3a" exitCode=0 Feb 24 10:14:35 crc kubenswrapper[4755]: I0224 10:14:35.595866 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dsqgk" event={"ID":"911bd7c3-481b-4759-9517-d4877fcccca0","Type":"ContainerDied","Data":"6bd2cc9140fc44836fe745e447eaa56a3ede307cabb8ef17c0af10cfbee5bc3a"} Feb 24 10:14:36 crc kubenswrapper[4755]: I0224 10:14:36.607278 4755 generic.go:334] "Generic (PLEG): container finished" podID="911bd7c3-481b-4759-9517-d4877fcccca0" containerID="39ac20693f688fe9c85bf0f52dbc052819850b1b6af512c7e9e92a80e2eabbac" exitCode=0 Feb 24 10:14:36 crc kubenswrapper[4755]: I0224 10:14:36.607335 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dsqgk" event={"ID":"911bd7c3-481b-4759-9517-d4877fcccca0","Type":"ContainerDied","Data":"39ac20693f688fe9c85bf0f52dbc052819850b1b6af512c7e9e92a80e2eabbac"} Feb 24 10:14:37 crc kubenswrapper[4755]: I0224 10:14:37.398408 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-jxpkq" Feb 24 10:14:37 crc kubenswrapper[4755]: I0224 10:14:37.617060 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dsqgk" event={"ID":"911bd7c3-481b-4759-9517-d4877fcccca0","Type":"ContainerStarted","Data":"51e3165d61b32eb935d4498065f4a0ee12c8b308810596c5f675d150d845637d"} Feb 24 10:14:37 crc kubenswrapper[4755]: I0224 10:14:37.617147 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dsqgk" 
event={"ID":"911bd7c3-481b-4759-9517-d4877fcccca0","Type":"ContainerStarted","Data":"7541de7ba48d62cb73ce45d85bd92c6ca1d6a1380c67e75d0470ec2cae730bb5"} Feb 24 10:14:37 crc kubenswrapper[4755]: I0224 10:14:37.617164 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dsqgk" event={"ID":"911bd7c3-481b-4759-9517-d4877fcccca0","Type":"ContainerStarted","Data":"86f30fbd550e4a12f4f0317b092c8af9519dd85cd4de11e700e8be8b7095be13"} Feb 24 10:14:38 crc kubenswrapper[4755]: I0224 10:14:38.279350 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-fr9z2" Feb 24 10:14:38 crc kubenswrapper[4755]: I0224 10:14:38.632693 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dsqgk" event={"ID":"911bd7c3-481b-4759-9517-d4877fcccca0","Type":"ContainerStarted","Data":"531c08d3dba15f64e552714c0eb8947762ba31b1e1022611d272325dab8e168c"} Feb 24 10:14:38 crc kubenswrapper[4755]: I0224 10:14:38.632757 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dsqgk" event={"ID":"911bd7c3-481b-4759-9517-d4877fcccca0","Type":"ContainerStarted","Data":"84bebebfb2823d9a8e2fcef52fe14ed97e054db2a95c5dc17325d7f5316185d1"} Feb 24 10:14:38 crc kubenswrapper[4755]: I0224 10:14:38.632777 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dsqgk" event={"ID":"911bd7c3-481b-4759-9517-d4877fcccca0","Type":"ContainerStarted","Data":"f6bdb413339d7fe25efa6e59752da137de2410a17589e70841b6c57d2d061dd0"} Feb 24 10:14:38 crc kubenswrapper[4755]: I0224 10:14:38.632962 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:38 crc kubenswrapper[4755]: I0224 10:14:38.663180 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-dsqgk" podStartSLOduration=5.912341064 podStartE2EDuration="12.663158938s" podCreationTimestamp="2026-02-24 10:14:26 +0000 
UTC" firstStartedPulling="2026-02-24 10:14:26.818820581 +0000 UTC m=+1171.275343144" lastFinishedPulling="2026-02-24 10:14:33.569638465 +0000 UTC m=+1178.026161018" observedRunningTime="2026-02-24 10:14:38.662384163 +0000 UTC m=+1183.118906706" watchObservedRunningTime="2026-02-24 10:14:38.663158938 +0000 UTC m=+1183.119681521" Feb 24 10:14:40 crc kubenswrapper[4755]: I0224 10:14:40.078198 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk"] Feb 24 10:14:40 crc kubenswrapper[4755]: I0224 10:14:40.079793 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk" Feb 24 10:14:40 crc kubenswrapper[4755]: I0224 10:14:40.082035 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 24 10:14:40 crc kubenswrapper[4755]: I0224 10:14:40.091281 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk"] Feb 24 10:14:40 crc kubenswrapper[4755]: I0224 10:14:40.206562 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25ab792f-0e9a-4892-9b7c-ea3b08d0301b-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk\" (UID: \"25ab792f-0e9a-4892-9b7c-ea3b08d0301b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk" Feb 24 10:14:40 crc kubenswrapper[4755]: I0224 10:14:40.206675 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/25ab792f-0e9a-4892-9b7c-ea3b08d0301b-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk\" (UID: 
\"25ab792f-0e9a-4892-9b7c-ea3b08d0301b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk" Feb 24 10:14:40 crc kubenswrapper[4755]: I0224 10:14:40.206768 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xtwh\" (UniqueName: \"kubernetes.io/projected/25ab792f-0e9a-4892-9b7c-ea3b08d0301b-kube-api-access-9xtwh\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk\" (UID: \"25ab792f-0e9a-4892-9b7c-ea3b08d0301b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk" Feb 24 10:14:40 crc kubenswrapper[4755]: I0224 10:14:40.307527 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/25ab792f-0e9a-4892-9b7c-ea3b08d0301b-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk\" (UID: \"25ab792f-0e9a-4892-9b7c-ea3b08d0301b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk" Feb 24 10:14:40 crc kubenswrapper[4755]: I0224 10:14:40.307626 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xtwh\" (UniqueName: \"kubernetes.io/projected/25ab792f-0e9a-4892-9b7c-ea3b08d0301b-kube-api-access-9xtwh\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk\" (UID: \"25ab792f-0e9a-4892-9b7c-ea3b08d0301b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk" Feb 24 10:14:40 crc kubenswrapper[4755]: I0224 10:14:40.307746 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25ab792f-0e9a-4892-9b7c-ea3b08d0301b-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk\" (UID: \"25ab792f-0e9a-4892-9b7c-ea3b08d0301b\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk" Feb 24 10:14:40 crc kubenswrapper[4755]: I0224 10:14:40.308155 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/25ab792f-0e9a-4892-9b7c-ea3b08d0301b-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk\" (UID: \"25ab792f-0e9a-4892-9b7c-ea3b08d0301b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk" Feb 24 10:14:40 crc kubenswrapper[4755]: I0224 10:14:40.308295 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25ab792f-0e9a-4892-9b7c-ea3b08d0301b-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk\" (UID: \"25ab792f-0e9a-4892-9b7c-ea3b08d0301b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk" Feb 24 10:14:40 crc kubenswrapper[4755]: I0224 10:14:40.333154 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xtwh\" (UniqueName: \"kubernetes.io/projected/25ab792f-0e9a-4892-9b7c-ea3b08d0301b-kube-api-access-9xtwh\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk\" (UID: \"25ab792f-0e9a-4892-9b7c-ea3b08d0301b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk" Feb 24 10:14:40 crc kubenswrapper[4755]: I0224 10:14:40.402104 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk" Feb 24 10:14:40 crc kubenswrapper[4755]: I0224 10:14:40.875018 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk"] Feb 24 10:14:40 crc kubenswrapper[4755]: W0224 10:14:40.887391 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25ab792f_0e9a_4892_9b7c_ea3b08d0301b.slice/crio-5e18c4685201a52b3c8b23e4fcb1dc5864e05834255bc07e9c340811aec5f5f8 WatchSource:0}: Error finding container 5e18c4685201a52b3c8b23e4fcb1dc5864e05834255bc07e9c340811aec5f5f8: Status 404 returned error can't find the container with id 5e18c4685201a52b3c8b23e4fcb1dc5864e05834255bc07e9c340811aec5f5f8 Feb 24 10:14:41 crc kubenswrapper[4755]: E0224 10:14:41.218239 4755 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25ab792f_0e9a_4892_9b7c_ea3b08d0301b.slice/crio-43069339e90cd8e1eadf65efa17164ca762885e5ae9e5a88eeabbc493d2fd011.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25ab792f_0e9a_4892_9b7c_ea3b08d0301b.slice/crio-conmon-43069339e90cd8e1eadf65efa17164ca762885e5ae9e5a88eeabbc493d2fd011.scope\": RecentStats: unable to find data in memory cache]" Feb 24 10:14:41 crc kubenswrapper[4755]: I0224 10:14:41.656002 4755 generic.go:334] "Generic (PLEG): container finished" podID="25ab792f-0e9a-4892-9b7c-ea3b08d0301b" containerID="43069339e90cd8e1eadf65efa17164ca762885e5ae9e5a88eeabbc493d2fd011" exitCode=0 Feb 24 10:14:41 crc kubenswrapper[4755]: I0224 10:14:41.656089 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk" 
event={"ID":"25ab792f-0e9a-4892-9b7c-ea3b08d0301b","Type":"ContainerDied","Data":"43069339e90cd8e1eadf65efa17164ca762885e5ae9e5a88eeabbc493d2fd011"} Feb 24 10:14:41 crc kubenswrapper[4755]: I0224 10:14:41.656161 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk" event={"ID":"25ab792f-0e9a-4892-9b7c-ea3b08d0301b","Type":"ContainerStarted","Data":"5e18c4685201a52b3c8b23e4fcb1dc5864e05834255bc07e9c340811aec5f5f8"} Feb 24 10:14:41 crc kubenswrapper[4755]: I0224 10:14:41.694454 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:41 crc kubenswrapper[4755]: I0224 10:14:41.758816 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:45 crc kubenswrapper[4755]: I0224 10:14:45.702580 4755 generic.go:334] "Generic (PLEG): container finished" podID="25ab792f-0e9a-4892-9b7c-ea3b08d0301b" containerID="953040d74e51c0b4f1d3d1dea19c8de2b391205382ea5848a09edbbf26d6b3b3" exitCode=0 Feb 24 10:14:45 crc kubenswrapper[4755]: I0224 10:14:45.702658 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk" event={"ID":"25ab792f-0e9a-4892-9b7c-ea3b08d0301b","Type":"ContainerDied","Data":"953040d74e51c0b4f1d3d1dea19c8de2b391205382ea5848a09edbbf26d6b3b3"} Feb 24 10:14:46 crc kubenswrapper[4755]: I0224 10:14:46.698845 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-dsqgk" Feb 24 10:14:46 crc kubenswrapper[4755]: I0224 10:14:46.714379 4755 generic.go:334] "Generic (PLEG): container finished" podID="25ab792f-0e9a-4892-9b7c-ea3b08d0301b" containerID="94f1df546fb443d99b5602f0c5a87e619642f13557c7f40364a03871870b1bda" exitCode=0 Feb 24 10:14:46 crc kubenswrapper[4755]: I0224 10:14:46.714443 4755 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk" event={"ID":"25ab792f-0e9a-4892-9b7c-ea3b08d0301b","Type":"ContainerDied","Data":"94f1df546fb443d99b5602f0c5a87e619642f13557c7f40364a03871870b1bda"} Feb 24 10:14:47 crc kubenswrapper[4755]: I0224 10:14:47.316693 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bzq67" Feb 24 10:14:48 crc kubenswrapper[4755]: I0224 10:14:48.002495 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk" Feb 24 10:14:48 crc kubenswrapper[4755]: I0224 10:14:48.113406 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/25ab792f-0e9a-4892-9b7c-ea3b08d0301b-util\") pod \"25ab792f-0e9a-4892-9b7c-ea3b08d0301b\" (UID: \"25ab792f-0e9a-4892-9b7c-ea3b08d0301b\") " Feb 24 10:14:48 crc kubenswrapper[4755]: I0224 10:14:48.113470 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25ab792f-0e9a-4892-9b7c-ea3b08d0301b-bundle\") pod \"25ab792f-0e9a-4892-9b7c-ea3b08d0301b\" (UID: \"25ab792f-0e9a-4892-9b7c-ea3b08d0301b\") " Feb 24 10:14:48 crc kubenswrapper[4755]: I0224 10:14:48.113547 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xtwh\" (UniqueName: \"kubernetes.io/projected/25ab792f-0e9a-4892-9b7c-ea3b08d0301b-kube-api-access-9xtwh\") pod \"25ab792f-0e9a-4892-9b7c-ea3b08d0301b\" (UID: \"25ab792f-0e9a-4892-9b7c-ea3b08d0301b\") " Feb 24 10:14:48 crc kubenswrapper[4755]: I0224 10:14:48.115483 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25ab792f-0e9a-4892-9b7c-ea3b08d0301b-bundle" (OuterVolumeSpecName: "bundle") pod 
"25ab792f-0e9a-4892-9b7c-ea3b08d0301b" (UID: "25ab792f-0e9a-4892-9b7c-ea3b08d0301b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:14:48 crc kubenswrapper[4755]: I0224 10:14:48.121357 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25ab792f-0e9a-4892-9b7c-ea3b08d0301b-kube-api-access-9xtwh" (OuterVolumeSpecName: "kube-api-access-9xtwh") pod "25ab792f-0e9a-4892-9b7c-ea3b08d0301b" (UID: "25ab792f-0e9a-4892-9b7c-ea3b08d0301b"). InnerVolumeSpecName "kube-api-access-9xtwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:14:48 crc kubenswrapper[4755]: I0224 10:14:48.123901 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25ab792f-0e9a-4892-9b7c-ea3b08d0301b-util" (OuterVolumeSpecName: "util") pod "25ab792f-0e9a-4892-9b7c-ea3b08d0301b" (UID: "25ab792f-0e9a-4892-9b7c-ea3b08d0301b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:14:48 crc kubenswrapper[4755]: I0224 10:14:48.214958 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xtwh\" (UniqueName: \"kubernetes.io/projected/25ab792f-0e9a-4892-9b7c-ea3b08d0301b-kube-api-access-9xtwh\") on node \"crc\" DevicePath \"\"" Feb 24 10:14:48 crc kubenswrapper[4755]: I0224 10:14:48.214992 4755 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/25ab792f-0e9a-4892-9b7c-ea3b08d0301b-util\") on node \"crc\" DevicePath \"\"" Feb 24 10:14:48 crc kubenswrapper[4755]: I0224 10:14:48.215085 4755 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25ab792f-0e9a-4892-9b7c-ea3b08d0301b-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 10:14:48 crc kubenswrapper[4755]: I0224 10:14:48.734235 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk" event={"ID":"25ab792f-0e9a-4892-9b7c-ea3b08d0301b","Type":"ContainerDied","Data":"5e18c4685201a52b3c8b23e4fcb1dc5864e05834255bc07e9c340811aec5f5f8"} Feb 24 10:14:48 crc kubenswrapper[4755]: I0224 10:14:48.734272 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e18c4685201a52b3c8b23e4fcb1dc5864e05834255bc07e9c340811aec5f5f8" Feb 24 10:14:48 crc kubenswrapper[4755]: I0224 10:14:48.734316 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5pl9dk" Feb 24 10:14:51 crc kubenswrapper[4755]: I0224 10:14:51.695349 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:14:51 crc kubenswrapper[4755]: I0224 10:14:51.695739 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 10:14:52 crc kubenswrapper[4755]: I0224 10:14:52.655183 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-544j4"] Feb 24 10:14:52 crc kubenswrapper[4755]: E0224 10:14:52.655611 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25ab792f-0e9a-4892-9b7c-ea3b08d0301b" containerName="pull" Feb 24 10:14:52 crc kubenswrapper[4755]: I0224 10:14:52.655622 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="25ab792f-0e9a-4892-9b7c-ea3b08d0301b" 
containerName="pull" Feb 24 10:14:52 crc kubenswrapper[4755]: E0224 10:14:52.655640 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25ab792f-0e9a-4892-9b7c-ea3b08d0301b" containerName="util" Feb 24 10:14:52 crc kubenswrapper[4755]: I0224 10:14:52.655646 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="25ab792f-0e9a-4892-9b7c-ea3b08d0301b" containerName="util" Feb 24 10:14:52 crc kubenswrapper[4755]: E0224 10:14:52.655659 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25ab792f-0e9a-4892-9b7c-ea3b08d0301b" containerName="extract" Feb 24 10:14:52 crc kubenswrapper[4755]: I0224 10:14:52.655666 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="25ab792f-0e9a-4892-9b7c-ea3b08d0301b" containerName="extract" Feb 24 10:14:52 crc kubenswrapper[4755]: I0224 10:14:52.655770 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="25ab792f-0e9a-4892-9b7c-ea3b08d0301b" containerName="extract" Feb 24 10:14:52 crc kubenswrapper[4755]: I0224 10:14:52.656170 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-544j4" Feb 24 10:14:52 crc kubenswrapper[4755]: I0224 10:14:52.657929 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 24 10:14:52 crc kubenswrapper[4755]: I0224 10:14:52.658355 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 24 10:14:52 crc kubenswrapper[4755]: I0224 10:14:52.662392 4755 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-gbdql" Feb 24 10:14:52 crc kubenswrapper[4755]: I0224 10:14:52.668529 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-544j4"] Feb 24 10:14:52 crc kubenswrapper[4755]: I0224 10:14:52.789505 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfchs\" (UniqueName: \"kubernetes.io/projected/eb1cd8c6-88ff-4113-88a2-2e4b61d57858-kube-api-access-qfchs\") pod \"cert-manager-operator-controller-manager-66c8bdd694-544j4\" (UID: \"eb1cd8c6-88ff-4113-88a2-2e4b61d57858\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-544j4" Feb 24 10:14:52 crc kubenswrapper[4755]: I0224 10:14:52.789566 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eb1cd8c6-88ff-4113-88a2-2e4b61d57858-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-544j4\" (UID: \"eb1cd8c6-88ff-4113-88a2-2e4b61d57858\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-544j4" Feb 24 10:14:52 crc kubenswrapper[4755]: I0224 10:14:52.890269 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qfchs\" (UniqueName: \"kubernetes.io/projected/eb1cd8c6-88ff-4113-88a2-2e4b61d57858-kube-api-access-qfchs\") pod \"cert-manager-operator-controller-manager-66c8bdd694-544j4\" (UID: \"eb1cd8c6-88ff-4113-88a2-2e4b61d57858\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-544j4" Feb 24 10:14:52 crc kubenswrapper[4755]: I0224 10:14:52.890319 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eb1cd8c6-88ff-4113-88a2-2e4b61d57858-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-544j4\" (UID: \"eb1cd8c6-88ff-4113-88a2-2e4b61d57858\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-544j4" Feb 24 10:14:52 crc kubenswrapper[4755]: I0224 10:14:52.890809 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eb1cd8c6-88ff-4113-88a2-2e4b61d57858-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-544j4\" (UID: \"eb1cd8c6-88ff-4113-88a2-2e4b61d57858\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-544j4" Feb 24 10:14:52 crc kubenswrapper[4755]: I0224 10:14:52.931130 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfchs\" (UniqueName: \"kubernetes.io/projected/eb1cd8c6-88ff-4113-88a2-2e4b61d57858-kube-api-access-qfchs\") pod \"cert-manager-operator-controller-manager-66c8bdd694-544j4\" (UID: \"eb1cd8c6-88ff-4113-88a2-2e4b61d57858\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-544j4" Feb 24 10:14:52 crc kubenswrapper[4755]: I0224 10:14:52.971028 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-544j4" Feb 24 10:14:53 crc kubenswrapper[4755]: I0224 10:14:53.258933 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-544j4"] Feb 24 10:14:53 crc kubenswrapper[4755]: I0224 10:14:53.774906 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-544j4" event={"ID":"eb1cd8c6-88ff-4113-88a2-2e4b61d57858","Type":"ContainerStarted","Data":"5639204dd6740d21ca854f084afe6b1b33e2721eeb6084feed9dcc31c3d8a48a"} Feb 24 10:14:56 crc kubenswrapper[4755]: I0224 10:14:56.793454 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-544j4" event={"ID":"eb1cd8c6-88ff-4113-88a2-2e4b61d57858","Type":"ContainerStarted","Data":"0fa8a9b98690c179fc597c515dc028b7e03dca253c97ee7b5df0c15764e279ad"} Feb 24 10:14:56 crc kubenswrapper[4755]: I0224 10:14:56.829215 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-544j4" podStartSLOduration=2.35356853 podStartE2EDuration="4.829197176s" podCreationTimestamp="2026-02-24 10:14:52 +0000 UTC" firstStartedPulling="2026-02-24 10:14:53.270590188 +0000 UTC m=+1197.727112741" lastFinishedPulling="2026-02-24 10:14:55.746218834 +0000 UTC m=+1200.202741387" observedRunningTime="2026-02-24 10:14:56.825861441 +0000 UTC m=+1201.282383984" watchObservedRunningTime="2026-02-24 10:14:56.829197176 +0000 UTC m=+1201.285719719" Feb 24 10:14:58 crc kubenswrapper[4755]: I0224 10:14:58.538160 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-6fczr"] Feb 24 10:14:58 crc kubenswrapper[4755]: I0224 10:14:58.539003 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-6fczr" Feb 24 10:14:58 crc kubenswrapper[4755]: I0224 10:14:58.540606 4755 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-rtfbz" Feb 24 10:14:58 crc kubenswrapper[4755]: I0224 10:14:58.540732 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 24 10:14:58 crc kubenswrapper[4755]: I0224 10:14:58.540863 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 24 10:14:58 crc kubenswrapper[4755]: I0224 10:14:58.548768 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-6fczr"] Feb 24 10:14:58 crc kubenswrapper[4755]: I0224 10:14:58.665998 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glfts\" (UniqueName: \"kubernetes.io/projected/06454fce-ab5a-4fda-beec-3d530703ef98-kube-api-access-glfts\") pod \"cert-manager-webhook-6888856db4-6fczr\" (UID: \"06454fce-ab5a-4fda-beec-3d530703ef98\") " pod="cert-manager/cert-manager-webhook-6888856db4-6fczr" Feb 24 10:14:58 crc kubenswrapper[4755]: I0224 10:14:58.666261 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/06454fce-ab5a-4fda-beec-3d530703ef98-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-6fczr\" (UID: \"06454fce-ab5a-4fda-beec-3d530703ef98\") " pod="cert-manager/cert-manager-webhook-6888856db4-6fczr" Feb 24 10:14:58 crc kubenswrapper[4755]: I0224 10:14:58.767347 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glfts\" (UniqueName: \"kubernetes.io/projected/06454fce-ab5a-4fda-beec-3d530703ef98-kube-api-access-glfts\") pod \"cert-manager-webhook-6888856db4-6fczr\" (UID: 
\"06454fce-ab5a-4fda-beec-3d530703ef98\") " pod="cert-manager/cert-manager-webhook-6888856db4-6fczr" Feb 24 10:14:58 crc kubenswrapper[4755]: I0224 10:14:58.767427 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/06454fce-ab5a-4fda-beec-3d530703ef98-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-6fczr\" (UID: \"06454fce-ab5a-4fda-beec-3d530703ef98\") " pod="cert-manager/cert-manager-webhook-6888856db4-6fczr" Feb 24 10:14:58 crc kubenswrapper[4755]: I0224 10:14:58.794389 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/06454fce-ab5a-4fda-beec-3d530703ef98-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-6fczr\" (UID: \"06454fce-ab5a-4fda-beec-3d530703ef98\") " pod="cert-manager/cert-manager-webhook-6888856db4-6fczr" Feb 24 10:14:58 crc kubenswrapper[4755]: I0224 10:14:58.794478 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glfts\" (UniqueName: \"kubernetes.io/projected/06454fce-ab5a-4fda-beec-3d530703ef98-kube-api-access-glfts\") pod \"cert-manager-webhook-6888856db4-6fczr\" (UID: \"06454fce-ab5a-4fda-beec-3d530703ef98\") " pod="cert-manager/cert-manager-webhook-6888856db4-6fczr" Feb 24 10:14:58 crc kubenswrapper[4755]: I0224 10:14:58.852021 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-6fczr" Feb 24 10:14:59 crc kubenswrapper[4755]: I0224 10:14:59.293866 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-6fczr"] Feb 24 10:14:59 crc kubenswrapper[4755]: W0224 10:14:59.304292 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06454fce_ab5a_4fda_beec_3d530703ef98.slice/crio-05eb9d55633db6a74f881027c11af272a0ff24fa01b0afaaf32a75d833abafaa WatchSource:0}: Error finding container 05eb9d55633db6a74f881027c11af272a0ff24fa01b0afaaf32a75d833abafaa: Status 404 returned error can't find the container with id 05eb9d55633db6a74f881027c11af272a0ff24fa01b0afaaf32a75d833abafaa Feb 24 10:14:59 crc kubenswrapper[4755]: I0224 10:14:59.813208 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-6fczr" event={"ID":"06454fce-ab5a-4fda-beec-3d530703ef98","Type":"ContainerStarted","Data":"05eb9d55633db6a74f881027c11af272a0ff24fa01b0afaaf32a75d833abafaa"} Feb 24 10:15:00 crc kubenswrapper[4755]: I0224 10:15:00.150841 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532135-x2cwg"] Feb 24 10:15:00 crc kubenswrapper[4755]: I0224 10:15:00.151864 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532135-x2cwg" Feb 24 10:15:00 crc kubenswrapper[4755]: I0224 10:15:00.155340 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 10:15:00 crc kubenswrapper[4755]: I0224 10:15:00.158958 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 24 10:15:00 crc kubenswrapper[4755]: I0224 10:15:00.164449 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532135-x2cwg"] Feb 24 10:15:00 crc kubenswrapper[4755]: I0224 10:15:00.291469 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4790bf8d-1019-4057-9032-404b1bba999e-secret-volume\") pod \"collect-profiles-29532135-x2cwg\" (UID: \"4790bf8d-1019-4057-9032-404b1bba999e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532135-x2cwg" Feb 24 10:15:00 crc kubenswrapper[4755]: I0224 10:15:00.291540 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4790bf8d-1019-4057-9032-404b1bba999e-config-volume\") pod \"collect-profiles-29532135-x2cwg\" (UID: \"4790bf8d-1019-4057-9032-404b1bba999e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532135-x2cwg" Feb 24 10:15:00 crc kubenswrapper[4755]: I0224 10:15:00.291630 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ct5s\" (UniqueName: \"kubernetes.io/projected/4790bf8d-1019-4057-9032-404b1bba999e-kube-api-access-8ct5s\") pod \"collect-profiles-29532135-x2cwg\" (UID: \"4790bf8d-1019-4057-9032-404b1bba999e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29532135-x2cwg" Feb 24 10:15:00 crc kubenswrapper[4755]: I0224 10:15:00.393546 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4790bf8d-1019-4057-9032-404b1bba999e-config-volume\") pod \"collect-profiles-29532135-x2cwg\" (UID: \"4790bf8d-1019-4057-9032-404b1bba999e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532135-x2cwg" Feb 24 10:15:00 crc kubenswrapper[4755]: I0224 10:15:00.394123 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ct5s\" (UniqueName: \"kubernetes.io/projected/4790bf8d-1019-4057-9032-404b1bba999e-kube-api-access-8ct5s\") pod \"collect-profiles-29532135-x2cwg\" (UID: \"4790bf8d-1019-4057-9032-404b1bba999e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532135-x2cwg" Feb 24 10:15:00 crc kubenswrapper[4755]: I0224 10:15:00.394279 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4790bf8d-1019-4057-9032-404b1bba999e-secret-volume\") pod \"collect-profiles-29532135-x2cwg\" (UID: \"4790bf8d-1019-4057-9032-404b1bba999e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532135-x2cwg" Feb 24 10:15:00 crc kubenswrapper[4755]: I0224 10:15:00.395783 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4790bf8d-1019-4057-9032-404b1bba999e-config-volume\") pod \"collect-profiles-29532135-x2cwg\" (UID: \"4790bf8d-1019-4057-9032-404b1bba999e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532135-x2cwg" Feb 24 10:15:00 crc kubenswrapper[4755]: I0224 10:15:00.411341 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4790bf8d-1019-4057-9032-404b1bba999e-secret-volume\") pod \"collect-profiles-29532135-x2cwg\" (UID: \"4790bf8d-1019-4057-9032-404b1bba999e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532135-x2cwg" Feb 24 10:15:00 crc kubenswrapper[4755]: I0224 10:15:00.424495 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ct5s\" (UniqueName: \"kubernetes.io/projected/4790bf8d-1019-4057-9032-404b1bba999e-kube-api-access-8ct5s\") pod \"collect-profiles-29532135-x2cwg\" (UID: \"4790bf8d-1019-4057-9032-404b1bba999e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532135-x2cwg" Feb 24 10:15:00 crc kubenswrapper[4755]: I0224 10:15:00.475627 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532135-x2cwg" Feb 24 10:15:00 crc kubenswrapper[4755]: I0224 10:15:00.969862 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532135-x2cwg"] Feb 24 10:15:00 crc kubenswrapper[4755]: W0224 10:15:00.983582 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4790bf8d_1019_4057_9032_404b1bba999e.slice/crio-e5ce08888503be7fd0937f11f16307e15a3615440ebf83cef7720474f8814241 WatchSource:0}: Error finding container e5ce08888503be7fd0937f11f16307e15a3615440ebf83cef7720474f8814241: Status 404 returned error can't find the container with id e5ce08888503be7fd0937f11f16307e15a3615440ebf83cef7720474f8814241 Feb 24 10:15:01 crc kubenswrapper[4755]: I0224 10:15:01.827123 4755 generic.go:334] "Generic (PLEG): container finished" podID="4790bf8d-1019-4057-9032-404b1bba999e" containerID="1c5ddb6474852ed38d20b64a66012553f39fc712c8b53cb28a21a514ef415b6f" exitCode=0 Feb 24 10:15:01 crc kubenswrapper[4755]: I0224 10:15:01.827450 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29532135-x2cwg" event={"ID":"4790bf8d-1019-4057-9032-404b1bba999e","Type":"ContainerDied","Data":"1c5ddb6474852ed38d20b64a66012553f39fc712c8b53cb28a21a514ef415b6f"} Feb 24 10:15:01 crc kubenswrapper[4755]: I0224 10:15:01.827483 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532135-x2cwg" event={"ID":"4790bf8d-1019-4057-9032-404b1bba999e","Type":"ContainerStarted","Data":"e5ce08888503be7fd0937f11f16307e15a3615440ebf83cef7720474f8814241"} Feb 24 10:15:02 crc kubenswrapper[4755]: I0224 10:15:02.082873 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-skt2x"] Feb 24 10:15:02 crc kubenswrapper[4755]: I0224 10:15:02.083895 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-skt2x" Feb 24 10:15:02 crc kubenswrapper[4755]: I0224 10:15:02.086470 4755 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-6mb7k" Feb 24 10:15:02 crc kubenswrapper[4755]: I0224 10:15:02.091757 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-skt2x"] Feb 24 10:15:02 crc kubenswrapper[4755]: I0224 10:15:02.146514 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfszt\" (UniqueName: \"kubernetes.io/projected/5d9c4260-3168-4912-87d5-8ccb80a47518-kube-api-access-vfszt\") pod \"cert-manager-cainjector-5545bd876-skt2x\" (UID: \"5d9c4260-3168-4912-87d5-8ccb80a47518\") " pod="cert-manager/cert-manager-cainjector-5545bd876-skt2x" Feb 24 10:15:02 crc kubenswrapper[4755]: I0224 10:15:02.146650 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/5d9c4260-3168-4912-87d5-8ccb80a47518-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-skt2x\" (UID: \"5d9c4260-3168-4912-87d5-8ccb80a47518\") " pod="cert-manager/cert-manager-cainjector-5545bd876-skt2x" Feb 24 10:15:02 crc kubenswrapper[4755]: I0224 10:15:02.248101 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d9c4260-3168-4912-87d5-8ccb80a47518-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-skt2x\" (UID: \"5d9c4260-3168-4912-87d5-8ccb80a47518\") " pod="cert-manager/cert-manager-cainjector-5545bd876-skt2x" Feb 24 10:15:02 crc kubenswrapper[4755]: I0224 10:15:02.248381 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfszt\" (UniqueName: \"kubernetes.io/projected/5d9c4260-3168-4912-87d5-8ccb80a47518-kube-api-access-vfszt\") pod \"cert-manager-cainjector-5545bd876-skt2x\" (UID: \"5d9c4260-3168-4912-87d5-8ccb80a47518\") " pod="cert-manager/cert-manager-cainjector-5545bd876-skt2x" Feb 24 10:15:02 crc kubenswrapper[4755]: I0224 10:15:02.267907 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d9c4260-3168-4912-87d5-8ccb80a47518-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-skt2x\" (UID: \"5d9c4260-3168-4912-87d5-8ccb80a47518\") " pod="cert-manager/cert-manager-cainjector-5545bd876-skt2x" Feb 24 10:15:02 crc kubenswrapper[4755]: I0224 10:15:02.269382 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfszt\" (UniqueName: \"kubernetes.io/projected/5d9c4260-3168-4912-87d5-8ccb80a47518-kube-api-access-vfszt\") pod \"cert-manager-cainjector-5545bd876-skt2x\" (UID: \"5d9c4260-3168-4912-87d5-8ccb80a47518\") " pod="cert-manager/cert-manager-cainjector-5545bd876-skt2x" Feb 24 10:15:02 crc kubenswrapper[4755]: I0224 10:15:02.401543 4755 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-skt2x" Feb 24 10:15:03 crc kubenswrapper[4755]: I0224 10:15:03.617326 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532135-x2cwg" Feb 24 10:15:03 crc kubenswrapper[4755]: I0224 10:15:03.664786 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4790bf8d-1019-4057-9032-404b1bba999e-config-volume\") pod \"4790bf8d-1019-4057-9032-404b1bba999e\" (UID: \"4790bf8d-1019-4057-9032-404b1bba999e\") " Feb 24 10:15:03 crc kubenswrapper[4755]: I0224 10:15:03.664983 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ct5s\" (UniqueName: \"kubernetes.io/projected/4790bf8d-1019-4057-9032-404b1bba999e-kube-api-access-8ct5s\") pod \"4790bf8d-1019-4057-9032-404b1bba999e\" (UID: \"4790bf8d-1019-4057-9032-404b1bba999e\") " Feb 24 10:15:03 crc kubenswrapper[4755]: I0224 10:15:03.665052 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4790bf8d-1019-4057-9032-404b1bba999e-secret-volume\") pod \"4790bf8d-1019-4057-9032-404b1bba999e\" (UID: \"4790bf8d-1019-4057-9032-404b1bba999e\") " Feb 24 10:15:03 crc kubenswrapper[4755]: I0224 10:15:03.665999 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4790bf8d-1019-4057-9032-404b1bba999e-config-volume" (OuterVolumeSpecName: "config-volume") pod "4790bf8d-1019-4057-9032-404b1bba999e" (UID: "4790bf8d-1019-4057-9032-404b1bba999e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:15:03 crc kubenswrapper[4755]: I0224 10:15:03.671402 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4790bf8d-1019-4057-9032-404b1bba999e-kube-api-access-8ct5s" (OuterVolumeSpecName: "kube-api-access-8ct5s") pod "4790bf8d-1019-4057-9032-404b1bba999e" (UID: "4790bf8d-1019-4057-9032-404b1bba999e"). InnerVolumeSpecName "kube-api-access-8ct5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:15:03 crc kubenswrapper[4755]: I0224 10:15:03.671418 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4790bf8d-1019-4057-9032-404b1bba999e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4790bf8d-1019-4057-9032-404b1bba999e" (UID: "4790bf8d-1019-4057-9032-404b1bba999e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 10:15:03 crc kubenswrapper[4755]: I0224 10:15:03.767152 4755 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4790bf8d-1019-4057-9032-404b1bba999e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 10:15:03 crc kubenswrapper[4755]: I0224 10:15:03.767181 4755 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4790bf8d-1019-4057-9032-404b1bba999e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 10:15:03 crc kubenswrapper[4755]: I0224 10:15:03.767191 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ct5s\" (UniqueName: \"kubernetes.io/projected/4790bf8d-1019-4057-9032-404b1bba999e-kube-api-access-8ct5s\") on node \"crc\" DevicePath \"\"" Feb 24 10:15:03 crc kubenswrapper[4755]: I0224 10:15:03.812008 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-skt2x"] Feb 24 10:15:03 crc kubenswrapper[4755]: W0224 
10:15:03.826422 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d9c4260_3168_4912_87d5_8ccb80a47518.slice/crio-9cfc0bcc2454223373a8539dcd1dc4225eb00048739a2cf7e7ef5de3c8e2d0f0 WatchSource:0}: Error finding container 9cfc0bcc2454223373a8539dcd1dc4225eb00048739a2cf7e7ef5de3c8e2d0f0: Status 404 returned error can't find the container with id 9cfc0bcc2454223373a8539dcd1dc4225eb00048739a2cf7e7ef5de3c8e2d0f0 Feb 24 10:15:03 crc kubenswrapper[4755]: I0224 10:15:03.849536 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-6fczr" event={"ID":"06454fce-ab5a-4fda-beec-3d530703ef98","Type":"ContainerStarted","Data":"c73261f8eef64a603577a79b789be4f11d4179ba83414a1c5031e7023e020144"} Feb 24 10:15:03 crc kubenswrapper[4755]: I0224 10:15:03.849613 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-6fczr" Feb 24 10:15:03 crc kubenswrapper[4755]: I0224 10:15:03.850907 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532135-x2cwg" event={"ID":"4790bf8d-1019-4057-9032-404b1bba999e","Type":"ContainerDied","Data":"e5ce08888503be7fd0937f11f16307e15a3615440ebf83cef7720474f8814241"} Feb 24 10:15:03 crc kubenswrapper[4755]: I0224 10:15:03.850927 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532135-x2cwg" Feb 24 10:15:03 crc kubenswrapper[4755]: I0224 10:15:03.850937 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5ce08888503be7fd0937f11f16307e15a3615440ebf83cef7720474f8814241" Feb 24 10:15:03 crc kubenswrapper[4755]: I0224 10:15:03.852142 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-skt2x" event={"ID":"5d9c4260-3168-4912-87d5-8ccb80a47518","Type":"ContainerStarted","Data":"9cfc0bcc2454223373a8539dcd1dc4225eb00048739a2cf7e7ef5de3c8e2d0f0"} Feb 24 10:15:03 crc kubenswrapper[4755]: I0224 10:15:03.869278 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-6fczr" podStartSLOduration=1.5110809349999998 podStartE2EDuration="5.869236093s" podCreationTimestamp="2026-02-24 10:14:58 +0000 UTC" firstStartedPulling="2026-02-24 10:14:59.305960357 +0000 UTC m=+1203.762482900" lastFinishedPulling="2026-02-24 10:15:03.664115525 +0000 UTC m=+1208.120638058" observedRunningTime="2026-02-24 10:15:03.866989573 +0000 UTC m=+1208.323512116" watchObservedRunningTime="2026-02-24 10:15:03.869236093 +0000 UTC m=+1208.325758636" Feb 24 10:15:04 crc kubenswrapper[4755]: I0224 10:15:04.860534 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-skt2x" event={"ID":"5d9c4260-3168-4912-87d5-8ccb80a47518","Type":"ContainerStarted","Data":"468076ab92d43bdfb1f7676196a2bc6ba2bc91007f648e57f6929fc0e6f59174"} Feb 24 10:15:04 crc kubenswrapper[4755]: I0224 10:15:04.885869 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-skt2x" podStartSLOduration=2.8858497400000003 podStartE2EDuration="2.88584974s" podCreationTimestamp="2026-02-24 10:15:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 10:15:04.880946205 +0000 UTC m=+1209.337468748" watchObservedRunningTime="2026-02-24 10:15:04.88584974 +0000 UTC m=+1209.342372303" Feb 24 10:15:08 crc kubenswrapper[4755]: I0224 10:15:08.856853 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-6fczr" Feb 24 10:15:17 crc kubenswrapper[4755]: I0224 10:15:17.436514 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-5bqv8"] Feb 24 10:15:17 crc kubenswrapper[4755]: E0224 10:15:17.437759 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4790bf8d-1019-4057-9032-404b1bba999e" containerName="collect-profiles" Feb 24 10:15:17 crc kubenswrapper[4755]: I0224 10:15:17.437777 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4790bf8d-1019-4057-9032-404b1bba999e" containerName="collect-profiles" Feb 24 10:15:17 crc kubenswrapper[4755]: I0224 10:15:17.437953 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="4790bf8d-1019-4057-9032-404b1bba999e" containerName="collect-profiles" Feb 24 10:15:17 crc kubenswrapper[4755]: I0224 10:15:17.438598 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-5bqv8" Feb 24 10:15:17 crc kubenswrapper[4755]: I0224 10:15:17.441821 4755 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-krptw" Feb 24 10:15:17 crc kubenswrapper[4755]: I0224 10:15:17.445296 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-5bqv8"] Feb 24 10:15:17 crc kubenswrapper[4755]: I0224 10:15:17.573968 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sq5d\" (UniqueName: \"kubernetes.io/projected/2931d838-4db8-4ab4-a4d9-ffdd06d00af9-kube-api-access-9sq5d\") pod \"cert-manager-545d4d4674-5bqv8\" (UID: \"2931d838-4db8-4ab4-a4d9-ffdd06d00af9\") " pod="cert-manager/cert-manager-545d4d4674-5bqv8" Feb 24 10:15:17 crc kubenswrapper[4755]: I0224 10:15:17.574171 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2931d838-4db8-4ab4-a4d9-ffdd06d00af9-bound-sa-token\") pod \"cert-manager-545d4d4674-5bqv8\" (UID: \"2931d838-4db8-4ab4-a4d9-ffdd06d00af9\") " pod="cert-manager/cert-manager-545d4d4674-5bqv8" Feb 24 10:15:17 crc kubenswrapper[4755]: I0224 10:15:17.676157 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sq5d\" (UniqueName: \"kubernetes.io/projected/2931d838-4db8-4ab4-a4d9-ffdd06d00af9-kube-api-access-9sq5d\") pod \"cert-manager-545d4d4674-5bqv8\" (UID: \"2931d838-4db8-4ab4-a4d9-ffdd06d00af9\") " pod="cert-manager/cert-manager-545d4d4674-5bqv8" Feb 24 10:15:17 crc kubenswrapper[4755]: I0224 10:15:17.676268 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2931d838-4db8-4ab4-a4d9-ffdd06d00af9-bound-sa-token\") pod \"cert-manager-545d4d4674-5bqv8\" (UID: 
\"2931d838-4db8-4ab4-a4d9-ffdd06d00af9\") " pod="cert-manager/cert-manager-545d4d4674-5bqv8" Feb 24 10:15:17 crc kubenswrapper[4755]: I0224 10:15:17.709307 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sq5d\" (UniqueName: \"kubernetes.io/projected/2931d838-4db8-4ab4-a4d9-ffdd06d00af9-kube-api-access-9sq5d\") pod \"cert-manager-545d4d4674-5bqv8\" (UID: \"2931d838-4db8-4ab4-a4d9-ffdd06d00af9\") " pod="cert-manager/cert-manager-545d4d4674-5bqv8" Feb 24 10:15:17 crc kubenswrapper[4755]: I0224 10:15:17.713150 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2931d838-4db8-4ab4-a4d9-ffdd06d00af9-bound-sa-token\") pod \"cert-manager-545d4d4674-5bqv8\" (UID: \"2931d838-4db8-4ab4-a4d9-ffdd06d00af9\") " pod="cert-manager/cert-manager-545d4d4674-5bqv8" Feb 24 10:15:17 crc kubenswrapper[4755]: I0224 10:15:17.774808 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-5bqv8" Feb 24 10:15:18 crc kubenswrapper[4755]: I0224 10:15:18.031022 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-5bqv8"] Feb 24 10:15:18 crc kubenswrapper[4755]: I0224 10:15:18.966723 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-5bqv8" event={"ID":"2931d838-4db8-4ab4-a4d9-ffdd06d00af9","Type":"ContainerStarted","Data":"028b67da56dda7c43c2ea35148d2600375aade05143e3bfd5885bd44bea555f1"} Feb 24 10:15:18 crc kubenswrapper[4755]: I0224 10:15:18.967255 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-5bqv8" event={"ID":"2931d838-4db8-4ab4-a4d9-ffdd06d00af9","Type":"ContainerStarted","Data":"f71a3aa8b589672ea47a05a4bcd9e8aee3620960a8fd151693736031713d8c94"} Feb 24 10:15:18 crc kubenswrapper[4755]: I0224 10:15:18.997226 4755 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="cert-manager/cert-manager-545d4d4674-5bqv8" podStartSLOduration=1.997204563 podStartE2EDuration="1.997204563s" podCreationTimestamp="2026-02-24 10:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 10:15:18.996226063 +0000 UTC m=+1223.452748646" watchObservedRunningTime="2026-02-24 10:15:18.997204563 +0000 UTC m=+1223.453727116" Feb 24 10:15:21 crc kubenswrapper[4755]: I0224 10:15:21.694883 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:15:21 crc kubenswrapper[4755]: I0224 10:15:21.695401 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 10:15:24 crc kubenswrapper[4755]: I0224 10:15:24.609529 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-rkrdw"] Feb 24 10:15:24 crc kubenswrapper[4755]: I0224 10:15:24.611037 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rkrdw" Feb 24 10:15:24 crc kubenswrapper[4755]: I0224 10:15:24.621544 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rkrdw"] Feb 24 10:15:24 crc kubenswrapper[4755]: I0224 10:15:24.652724 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 24 10:15:24 crc kubenswrapper[4755]: I0224 10:15:24.652816 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-k2f2k" Feb 24 10:15:24 crc kubenswrapper[4755]: I0224 10:15:24.652877 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 24 10:15:24 crc kubenswrapper[4755]: I0224 10:15:24.785619 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq99j\" (UniqueName: \"kubernetes.io/projected/3956845d-afdf-4c4a-a351-7d5e636c2f90-kube-api-access-xq99j\") pod \"openstack-operator-index-rkrdw\" (UID: \"3956845d-afdf-4c4a-a351-7d5e636c2f90\") " pod="openstack-operators/openstack-operator-index-rkrdw" Feb 24 10:15:24 crc kubenswrapper[4755]: I0224 10:15:24.886709 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq99j\" (UniqueName: \"kubernetes.io/projected/3956845d-afdf-4c4a-a351-7d5e636c2f90-kube-api-access-xq99j\") pod \"openstack-operator-index-rkrdw\" (UID: \"3956845d-afdf-4c4a-a351-7d5e636c2f90\") " pod="openstack-operators/openstack-operator-index-rkrdw" Feb 24 10:15:24 crc kubenswrapper[4755]: I0224 10:15:24.914567 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq99j\" (UniqueName: \"kubernetes.io/projected/3956845d-afdf-4c4a-a351-7d5e636c2f90-kube-api-access-xq99j\") pod \"openstack-operator-index-rkrdw\" (UID: 
\"3956845d-afdf-4c4a-a351-7d5e636c2f90\") " pod="openstack-operators/openstack-operator-index-rkrdw" Feb 24 10:15:24 crc kubenswrapper[4755]: I0224 10:15:24.969231 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rkrdw" Feb 24 10:15:25 crc kubenswrapper[4755]: I0224 10:15:25.385323 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rkrdw"] Feb 24 10:15:25 crc kubenswrapper[4755]: W0224 10:15:25.396219 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3956845d_afdf_4c4a_a351_7d5e636c2f90.slice/crio-402aa4e5b52ff629bc5533617415953b6cd4168fc76f2b9d9bd426ce172c9697 WatchSource:0}: Error finding container 402aa4e5b52ff629bc5533617415953b6cd4168fc76f2b9d9bd426ce172c9697: Status 404 returned error can't find the container with id 402aa4e5b52ff629bc5533617415953b6cd4168fc76f2b9d9bd426ce172c9697 Feb 24 10:15:26 crc kubenswrapper[4755]: I0224 10:15:26.022425 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rkrdw" event={"ID":"3956845d-afdf-4c4a-a351-7d5e636c2f90","Type":"ContainerStarted","Data":"402aa4e5b52ff629bc5533617415953b6cd4168fc76f2b9d9bd426ce172c9697"} Feb 24 10:15:27 crc kubenswrapper[4755]: I0224 10:15:27.031451 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rkrdw" event={"ID":"3956845d-afdf-4c4a-a351-7d5e636c2f90","Type":"ContainerStarted","Data":"e2a2bd9e4eaf296882066e26ed0226846d89fa3178045199f081042a69312151"} Feb 24 10:15:27 crc kubenswrapper[4755]: I0224 10:15:27.054466 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-rkrdw" podStartSLOduration=2.169533171 podStartE2EDuration="3.054429237s" podCreationTimestamp="2026-02-24 10:15:24 +0000 UTC" 
firstStartedPulling="2026-02-24 10:15:25.399492131 +0000 UTC m=+1229.856014714" lastFinishedPulling="2026-02-24 10:15:26.284388197 +0000 UTC m=+1230.740910780" observedRunningTime="2026-02-24 10:15:27.0516063 +0000 UTC m=+1231.508128883" watchObservedRunningTime="2026-02-24 10:15:27.054429237 +0000 UTC m=+1231.510951820" Feb 24 10:15:27 crc kubenswrapper[4755]: I0224 10:15:27.789335 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-rkrdw"] Feb 24 10:15:28 crc kubenswrapper[4755]: I0224 10:15:28.405586 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-t2pnf"] Feb 24 10:15:28 crc kubenswrapper[4755]: I0224 10:15:28.406795 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-t2pnf" Feb 24 10:15:28 crc kubenswrapper[4755]: I0224 10:15:28.418718 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-t2pnf"] Feb 24 10:15:28 crc kubenswrapper[4755]: I0224 10:15:28.541847 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l687k\" (UniqueName: \"kubernetes.io/projected/c1e98848-f510-4052-bb40-4f2951b9f4d8-kube-api-access-l687k\") pod \"openstack-operator-index-t2pnf\" (UID: \"c1e98848-f510-4052-bb40-4f2951b9f4d8\") " pod="openstack-operators/openstack-operator-index-t2pnf" Feb 24 10:15:28 crc kubenswrapper[4755]: I0224 10:15:28.644001 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l687k\" (UniqueName: \"kubernetes.io/projected/c1e98848-f510-4052-bb40-4f2951b9f4d8-kube-api-access-l687k\") pod \"openstack-operator-index-t2pnf\" (UID: \"c1e98848-f510-4052-bb40-4f2951b9f4d8\") " pod="openstack-operators/openstack-operator-index-t2pnf" Feb 24 10:15:28 crc kubenswrapper[4755]: I0224 10:15:28.676228 4755 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-l687k\" (UniqueName: \"kubernetes.io/projected/c1e98848-f510-4052-bb40-4f2951b9f4d8-kube-api-access-l687k\") pod \"openstack-operator-index-t2pnf\" (UID: \"c1e98848-f510-4052-bb40-4f2951b9f4d8\") " pod="openstack-operators/openstack-operator-index-t2pnf" Feb 24 10:15:28 crc kubenswrapper[4755]: I0224 10:15:28.740954 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-t2pnf" Feb 24 10:15:29 crc kubenswrapper[4755]: I0224 10:15:29.045671 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-rkrdw" podUID="3956845d-afdf-4c4a-a351-7d5e636c2f90" containerName="registry-server" containerID="cri-o://e2a2bd9e4eaf296882066e26ed0226846d89fa3178045199f081042a69312151" gracePeriod=2 Feb 24 10:15:29 crc kubenswrapper[4755]: I0224 10:15:29.223508 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-t2pnf"] Feb 24 10:15:29 crc kubenswrapper[4755]: W0224 10:15:29.232916 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1e98848_f510_4052_bb40_4f2951b9f4d8.slice/crio-5a5365075d21f630d8a1cc2ad70abbed737b30c36dc8e68c05a4eb4eb7dd6289 WatchSource:0}: Error finding container 5a5365075d21f630d8a1cc2ad70abbed737b30c36dc8e68c05a4eb4eb7dd6289: Status 404 returned error can't find the container with id 5a5365075d21f630d8a1cc2ad70abbed737b30c36dc8e68c05a4eb4eb7dd6289 Feb 24 10:15:29 crc kubenswrapper[4755]: I0224 10:15:29.400938 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rkrdw" Feb 24 10:15:29 crc kubenswrapper[4755]: I0224 10:15:29.557149 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq99j\" (UniqueName: \"kubernetes.io/projected/3956845d-afdf-4c4a-a351-7d5e636c2f90-kube-api-access-xq99j\") pod \"3956845d-afdf-4c4a-a351-7d5e636c2f90\" (UID: \"3956845d-afdf-4c4a-a351-7d5e636c2f90\") " Feb 24 10:15:29 crc kubenswrapper[4755]: I0224 10:15:29.565319 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3956845d-afdf-4c4a-a351-7d5e636c2f90-kube-api-access-xq99j" (OuterVolumeSpecName: "kube-api-access-xq99j") pod "3956845d-afdf-4c4a-a351-7d5e636c2f90" (UID: "3956845d-afdf-4c4a-a351-7d5e636c2f90"). InnerVolumeSpecName "kube-api-access-xq99j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:15:29 crc kubenswrapper[4755]: I0224 10:15:29.659620 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq99j\" (UniqueName: \"kubernetes.io/projected/3956845d-afdf-4c4a-a351-7d5e636c2f90-kube-api-access-xq99j\") on node \"crc\" DevicePath \"\"" Feb 24 10:15:30 crc kubenswrapper[4755]: I0224 10:15:30.055403 4755 generic.go:334] "Generic (PLEG): container finished" podID="3956845d-afdf-4c4a-a351-7d5e636c2f90" containerID="e2a2bd9e4eaf296882066e26ed0226846d89fa3178045199f081042a69312151" exitCode=0 Feb 24 10:15:30 crc kubenswrapper[4755]: I0224 10:15:30.055477 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rkrdw" event={"ID":"3956845d-afdf-4c4a-a351-7d5e636c2f90","Type":"ContainerDied","Data":"e2a2bd9e4eaf296882066e26ed0226846d89fa3178045199f081042a69312151"} Feb 24 10:15:30 crc kubenswrapper[4755]: I0224 10:15:30.055505 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rkrdw" 
event={"ID":"3956845d-afdf-4c4a-a351-7d5e636c2f90","Type":"ContainerDied","Data":"402aa4e5b52ff629bc5533617415953b6cd4168fc76f2b9d9bd426ce172c9697"} Feb 24 10:15:30 crc kubenswrapper[4755]: I0224 10:15:30.055526 4755 scope.go:117] "RemoveContainer" containerID="e2a2bd9e4eaf296882066e26ed0226846d89fa3178045199f081042a69312151" Feb 24 10:15:30 crc kubenswrapper[4755]: I0224 10:15:30.056120 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rkrdw" Feb 24 10:15:30 crc kubenswrapper[4755]: I0224 10:15:30.060050 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-t2pnf" event={"ID":"c1e98848-f510-4052-bb40-4f2951b9f4d8","Type":"ContainerStarted","Data":"692dbe71a58b8957c8cdb6443f40814888efd07e6e12c8828f0acfb152d8dde4"} Feb 24 10:15:30 crc kubenswrapper[4755]: I0224 10:15:30.060160 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-t2pnf" event={"ID":"c1e98848-f510-4052-bb40-4f2951b9f4d8","Type":"ContainerStarted","Data":"5a5365075d21f630d8a1cc2ad70abbed737b30c36dc8e68c05a4eb4eb7dd6289"} Feb 24 10:15:30 crc kubenswrapper[4755]: I0224 10:15:30.092544 4755 scope.go:117] "RemoveContainer" containerID="e2a2bd9e4eaf296882066e26ed0226846d89fa3178045199f081042a69312151" Feb 24 10:15:30 crc kubenswrapper[4755]: I0224 10:15:30.092957 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-t2pnf" podStartSLOduration=1.5464449409999999 podStartE2EDuration="2.092942606s" podCreationTimestamp="2026-02-24 10:15:28 +0000 UTC" firstStartedPulling="2026-02-24 10:15:29.238170487 +0000 UTC m=+1233.694693080" lastFinishedPulling="2026-02-24 10:15:29.784668162 +0000 UTC m=+1234.241190745" observedRunningTime="2026-02-24 10:15:30.087508468 +0000 UTC m=+1234.544031061" watchObservedRunningTime="2026-02-24 10:15:30.092942606 +0000 UTC 
m=+1234.549465159" Feb 24 10:15:30 crc kubenswrapper[4755]: E0224 10:15:30.094050 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2a2bd9e4eaf296882066e26ed0226846d89fa3178045199f081042a69312151\": container with ID starting with e2a2bd9e4eaf296882066e26ed0226846d89fa3178045199f081042a69312151 not found: ID does not exist" containerID="e2a2bd9e4eaf296882066e26ed0226846d89fa3178045199f081042a69312151" Feb 24 10:15:30 crc kubenswrapper[4755]: I0224 10:15:30.094155 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2a2bd9e4eaf296882066e26ed0226846d89fa3178045199f081042a69312151"} err="failed to get container status \"e2a2bd9e4eaf296882066e26ed0226846d89fa3178045199f081042a69312151\": rpc error: code = NotFound desc = could not find container \"e2a2bd9e4eaf296882066e26ed0226846d89fa3178045199f081042a69312151\": container with ID starting with e2a2bd9e4eaf296882066e26ed0226846d89fa3178045199f081042a69312151 not found: ID does not exist" Feb 24 10:15:30 crc kubenswrapper[4755]: I0224 10:15:30.107595 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-rkrdw"] Feb 24 10:15:30 crc kubenswrapper[4755]: I0224 10:15:30.110976 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-rkrdw"] Feb 24 10:15:30 crc kubenswrapper[4755]: I0224 10:15:30.323726 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3956845d-afdf-4c4a-a351-7d5e636c2f90" path="/var/lib/kubelet/pods/3956845d-afdf-4c4a-a351-7d5e636c2f90/volumes" Feb 24 10:15:38 crc kubenswrapper[4755]: I0224 10:15:38.741568 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-t2pnf" Feb 24 10:15:38 crc kubenswrapper[4755]: I0224 10:15:38.742271 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack-operators/openstack-operator-index-t2pnf" Feb 24 10:15:38 crc kubenswrapper[4755]: I0224 10:15:38.790382 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-t2pnf" Feb 24 10:15:39 crc kubenswrapper[4755]: I0224 10:15:39.169682 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-t2pnf" Feb 24 10:15:46 crc kubenswrapper[4755]: I0224 10:15:46.190746 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb"] Feb 24 10:15:46 crc kubenswrapper[4755]: E0224 10:15:46.191613 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3956845d-afdf-4c4a-a351-7d5e636c2f90" containerName="registry-server" Feb 24 10:15:46 crc kubenswrapper[4755]: I0224 10:15:46.191628 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="3956845d-afdf-4c4a-a351-7d5e636c2f90" containerName="registry-server" Feb 24 10:15:46 crc kubenswrapper[4755]: I0224 10:15:46.191778 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="3956845d-afdf-4c4a-a351-7d5e636c2f90" containerName="registry-server" Feb 24 10:15:46 crc kubenswrapper[4755]: I0224 10:15:46.192680 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb" Feb 24 10:15:46 crc kubenswrapper[4755]: I0224 10:15:46.194538 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-2j9mf" Feb 24 10:15:46 crc kubenswrapper[4755]: I0224 10:15:46.203761 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb"] Feb 24 10:15:46 crc kubenswrapper[4755]: I0224 10:15:46.218962 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wqck\" (UniqueName: \"kubernetes.io/projected/df15a53f-dce2-4c6a-a42c-06edd0ded8ea-kube-api-access-9wqck\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb\" (UID: \"df15a53f-dce2-4c6a-a42c-06edd0ded8ea\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb" Feb 24 10:15:46 crc kubenswrapper[4755]: I0224 10:15:46.219098 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df15a53f-dce2-4c6a-a42c-06edd0ded8ea-bundle\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb\" (UID: \"df15a53f-dce2-4c6a-a42c-06edd0ded8ea\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb" Feb 24 10:15:46 crc kubenswrapper[4755]: I0224 10:15:46.219152 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df15a53f-dce2-4c6a-a42c-06edd0ded8ea-util\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb\" (UID: \"df15a53f-dce2-4c6a-a42c-06edd0ded8ea\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb" Feb 24 10:15:46 crc kubenswrapper[4755]: I0224 
10:15:46.319977 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wqck\" (UniqueName: \"kubernetes.io/projected/df15a53f-dce2-4c6a-a42c-06edd0ded8ea-kube-api-access-9wqck\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb\" (UID: \"df15a53f-dce2-4c6a-a42c-06edd0ded8ea\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb" Feb 24 10:15:46 crc kubenswrapper[4755]: I0224 10:15:46.320057 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df15a53f-dce2-4c6a-a42c-06edd0ded8ea-bundle\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb\" (UID: \"df15a53f-dce2-4c6a-a42c-06edd0ded8ea\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb" Feb 24 10:15:46 crc kubenswrapper[4755]: I0224 10:15:46.320121 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df15a53f-dce2-4c6a-a42c-06edd0ded8ea-util\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb\" (UID: \"df15a53f-dce2-4c6a-a42c-06edd0ded8ea\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb" Feb 24 10:15:46 crc kubenswrapper[4755]: I0224 10:15:46.320520 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df15a53f-dce2-4c6a-a42c-06edd0ded8ea-util\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb\" (UID: \"df15a53f-dce2-4c6a-a42c-06edd0ded8ea\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb" Feb 24 10:15:46 crc kubenswrapper[4755]: I0224 10:15:46.320633 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/df15a53f-dce2-4c6a-a42c-06edd0ded8ea-bundle\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb\" (UID: \"df15a53f-dce2-4c6a-a42c-06edd0ded8ea\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb" Feb 24 10:15:46 crc kubenswrapper[4755]: I0224 10:15:46.344389 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wqck\" (UniqueName: \"kubernetes.io/projected/df15a53f-dce2-4c6a-a42c-06edd0ded8ea-kube-api-access-9wqck\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb\" (UID: \"df15a53f-dce2-4c6a-a42c-06edd0ded8ea\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb" Feb 24 10:15:46 crc kubenswrapper[4755]: I0224 10:15:46.526207 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb" Feb 24 10:15:46 crc kubenswrapper[4755]: I0224 10:15:46.758588 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb"] Feb 24 10:15:46 crc kubenswrapper[4755]: I0224 10:15:46.822215 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51652: no serving certificate available for the kubelet" Feb 24 10:15:47 crc kubenswrapper[4755]: I0224 10:15:47.199305 4755 generic.go:334] "Generic (PLEG): container finished" podID="df15a53f-dce2-4c6a-a42c-06edd0ded8ea" containerID="0afc30b4ff4b4b22829d5d86ec1bf1b6829e440e75af1d866e9279cc5b844817" exitCode=0 Feb 24 10:15:47 crc kubenswrapper[4755]: I0224 10:15:47.199362 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb" event={"ID":"df15a53f-dce2-4c6a-a42c-06edd0ded8ea","Type":"ContainerDied","Data":"0afc30b4ff4b4b22829d5d86ec1bf1b6829e440e75af1d866e9279cc5b844817"} Feb 24 
10:15:47 crc kubenswrapper[4755]: I0224 10:15:47.199404 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb" event={"ID":"df15a53f-dce2-4c6a-a42c-06edd0ded8ea","Type":"ContainerStarted","Data":"6e4b213bb1f8ce0fec6d35cce8171cf77ffa4787454247be609eecf2fc324ca2"} Feb 24 10:15:48 crc kubenswrapper[4755]: I0224 10:15:48.211346 4755 generic.go:334] "Generic (PLEG): container finished" podID="df15a53f-dce2-4c6a-a42c-06edd0ded8ea" containerID="8b0ca0816721b55570bb3f9853b9363a97efdc5618007851145ef9ea416059ea" exitCode=0 Feb 24 10:15:48 crc kubenswrapper[4755]: I0224 10:15:48.211491 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb" event={"ID":"df15a53f-dce2-4c6a-a42c-06edd0ded8ea","Type":"ContainerDied","Data":"8b0ca0816721b55570bb3f9853b9363a97efdc5618007851145ef9ea416059ea"} Feb 24 10:15:49 crc kubenswrapper[4755]: I0224 10:15:49.225102 4755 generic.go:334] "Generic (PLEG): container finished" podID="df15a53f-dce2-4c6a-a42c-06edd0ded8ea" containerID="4c3e29f576d325406e3a7e3c2919ed5c6f2c06ec0d52df3a7a23b423121dac9f" exitCode=0 Feb 24 10:15:49 crc kubenswrapper[4755]: I0224 10:15:49.225178 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb" event={"ID":"df15a53f-dce2-4c6a-a42c-06edd0ded8ea","Type":"ContainerDied","Data":"4c3e29f576d325406e3a7e3c2919ed5c6f2c06ec0d52df3a7a23b423121dac9f"} Feb 24 10:15:50 crc kubenswrapper[4755]: I0224 10:15:50.521630 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb" Feb 24 10:15:50 crc kubenswrapper[4755]: I0224 10:15:50.594218 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df15a53f-dce2-4c6a-a42c-06edd0ded8ea-bundle\") pod \"df15a53f-dce2-4c6a-a42c-06edd0ded8ea\" (UID: \"df15a53f-dce2-4c6a-a42c-06edd0ded8ea\") " Feb 24 10:15:50 crc kubenswrapper[4755]: I0224 10:15:50.595283 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wqck\" (UniqueName: \"kubernetes.io/projected/df15a53f-dce2-4c6a-a42c-06edd0ded8ea-kube-api-access-9wqck\") pod \"df15a53f-dce2-4c6a-a42c-06edd0ded8ea\" (UID: \"df15a53f-dce2-4c6a-a42c-06edd0ded8ea\") " Feb 24 10:15:50 crc kubenswrapper[4755]: I0224 10:15:50.595684 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df15a53f-dce2-4c6a-a42c-06edd0ded8ea-util\") pod \"df15a53f-dce2-4c6a-a42c-06edd0ded8ea\" (UID: \"df15a53f-dce2-4c6a-a42c-06edd0ded8ea\") " Feb 24 10:15:50 crc kubenswrapper[4755]: I0224 10:15:50.596431 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df15a53f-dce2-4c6a-a42c-06edd0ded8ea-bundle" (OuterVolumeSpecName: "bundle") pod "df15a53f-dce2-4c6a-a42c-06edd0ded8ea" (UID: "df15a53f-dce2-4c6a-a42c-06edd0ded8ea"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:15:50 crc kubenswrapper[4755]: I0224 10:15:50.596782 4755 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df15a53f-dce2-4c6a-a42c-06edd0ded8ea-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 10:15:50 crc kubenswrapper[4755]: I0224 10:15:50.607938 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df15a53f-dce2-4c6a-a42c-06edd0ded8ea-kube-api-access-9wqck" (OuterVolumeSpecName: "kube-api-access-9wqck") pod "df15a53f-dce2-4c6a-a42c-06edd0ded8ea" (UID: "df15a53f-dce2-4c6a-a42c-06edd0ded8ea"). InnerVolumeSpecName "kube-api-access-9wqck". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:15:50 crc kubenswrapper[4755]: I0224 10:15:50.629977 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df15a53f-dce2-4c6a-a42c-06edd0ded8ea-util" (OuterVolumeSpecName: "util") pod "df15a53f-dce2-4c6a-a42c-06edd0ded8ea" (UID: "df15a53f-dce2-4c6a-a42c-06edd0ded8ea"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:15:50 crc kubenswrapper[4755]: I0224 10:15:50.697817 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wqck\" (UniqueName: \"kubernetes.io/projected/df15a53f-dce2-4c6a-a42c-06edd0ded8ea-kube-api-access-9wqck\") on node \"crc\" DevicePath \"\"" Feb 24 10:15:50 crc kubenswrapper[4755]: I0224 10:15:50.697849 4755 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df15a53f-dce2-4c6a-a42c-06edd0ded8ea-util\") on node \"crc\" DevicePath \"\"" Feb 24 10:15:51 crc kubenswrapper[4755]: I0224 10:15:51.244868 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb" event={"ID":"df15a53f-dce2-4c6a-a42c-06edd0ded8ea","Type":"ContainerDied","Data":"6e4b213bb1f8ce0fec6d35cce8171cf77ffa4787454247be609eecf2fc324ca2"} Feb 24 10:15:51 crc kubenswrapper[4755]: I0224 10:15:51.245344 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e4b213bb1f8ce0fec6d35cce8171cf77ffa4787454247be609eecf2fc324ca2" Feb 24 10:15:51 crc kubenswrapper[4755]: I0224 10:15:51.244947 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14bd9kb" Feb 24 10:15:51 crc kubenswrapper[4755]: I0224 10:15:51.695008 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:15:51 crc kubenswrapper[4755]: I0224 10:15:51.695106 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 10:15:51 crc kubenswrapper[4755]: I0224 10:15:51.695158 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" Feb 24 10:15:51 crc kubenswrapper[4755]: I0224 10:15:51.695796 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"805e6e8826e15b1db15a276c8f3343a64e680fde18436416c1dae4ce97e5fa1f"} pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 10:15:51 crc kubenswrapper[4755]: I0224 10:15:51.695867 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" containerID="cri-o://805e6e8826e15b1db15a276c8f3343a64e680fde18436416c1dae4ce97e5fa1f" gracePeriod=600 Feb 24 10:15:52 crc kubenswrapper[4755]: I0224 10:15:52.251948 4755 generic.go:334] "Generic (PLEG): 
container finished" podID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerID="805e6e8826e15b1db15a276c8f3343a64e680fde18436416c1dae4ce97e5fa1f" exitCode=0 Feb 24 10:15:52 crc kubenswrapper[4755]: I0224 10:15:52.251991 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerDied","Data":"805e6e8826e15b1db15a276c8f3343a64e680fde18436416c1dae4ce97e5fa1f"} Feb 24 10:15:52 crc kubenswrapper[4755]: I0224 10:15:52.252257 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerStarted","Data":"3bcc552811b027de294708fb8fbb284b52f08a7647c99aefef0df363ad7db3d3"} Feb 24 10:15:52 crc kubenswrapper[4755]: I0224 10:15:52.252274 4755 scope.go:117] "RemoveContainer" containerID="a16cb28625dc72750f4129b2fe696ae5e7f59a186c5c1bd832753d622e238c1f" Feb 24 10:15:53 crc kubenswrapper[4755]: I0224 10:15:53.523150 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-55c649df44-5jmz5"] Feb 24 10:15:53 crc kubenswrapper[4755]: E0224 10:15:53.523697 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df15a53f-dce2-4c6a-a42c-06edd0ded8ea" containerName="extract" Feb 24 10:15:53 crc kubenswrapper[4755]: I0224 10:15:53.523711 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="df15a53f-dce2-4c6a-a42c-06edd0ded8ea" containerName="extract" Feb 24 10:15:53 crc kubenswrapper[4755]: E0224 10:15:53.523723 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df15a53f-dce2-4c6a-a42c-06edd0ded8ea" containerName="util" Feb 24 10:15:53 crc kubenswrapper[4755]: I0224 10:15:53.523732 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="df15a53f-dce2-4c6a-a42c-06edd0ded8ea" containerName="util" Feb 24 10:15:53 crc kubenswrapper[4755]: 
E0224 10:15:53.523755 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df15a53f-dce2-4c6a-a42c-06edd0ded8ea" containerName="pull" Feb 24 10:15:53 crc kubenswrapper[4755]: I0224 10:15:53.523765 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="df15a53f-dce2-4c6a-a42c-06edd0ded8ea" containerName="pull" Feb 24 10:15:53 crc kubenswrapper[4755]: I0224 10:15:53.523900 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="df15a53f-dce2-4c6a-a42c-06edd0ded8ea" containerName="extract" Feb 24 10:15:53 crc kubenswrapper[4755]: I0224 10:15:53.524437 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-55c649df44-5jmz5" Feb 24 10:15:53 crc kubenswrapper[4755]: I0224 10:15:53.527672 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-k2qw5" Feb 24 10:15:53 crc kubenswrapper[4755]: I0224 10:15:53.533622 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m8lr\" (UniqueName: \"kubernetes.io/projected/787b64d5-1156-4a67-b0d8-5750834ad269-kube-api-access-9m8lr\") pod \"openstack-operator-controller-init-55c649df44-5jmz5\" (UID: \"787b64d5-1156-4a67-b0d8-5750834ad269\") " pod="openstack-operators/openstack-operator-controller-init-55c649df44-5jmz5" Feb 24 10:15:53 crc kubenswrapper[4755]: I0224 10:15:53.543247 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-55c649df44-5jmz5"] Feb 24 10:15:53 crc kubenswrapper[4755]: I0224 10:15:53.634863 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m8lr\" (UniqueName: \"kubernetes.io/projected/787b64d5-1156-4a67-b0d8-5750834ad269-kube-api-access-9m8lr\") pod \"openstack-operator-controller-init-55c649df44-5jmz5\" (UID: 
\"787b64d5-1156-4a67-b0d8-5750834ad269\") " pod="openstack-operators/openstack-operator-controller-init-55c649df44-5jmz5" Feb 24 10:15:53 crc kubenswrapper[4755]: I0224 10:15:53.653658 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m8lr\" (UniqueName: \"kubernetes.io/projected/787b64d5-1156-4a67-b0d8-5750834ad269-kube-api-access-9m8lr\") pod \"openstack-operator-controller-init-55c649df44-5jmz5\" (UID: \"787b64d5-1156-4a67-b0d8-5750834ad269\") " pod="openstack-operators/openstack-operator-controller-init-55c649df44-5jmz5" Feb 24 10:15:53 crc kubenswrapper[4755]: I0224 10:15:53.848275 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-55c649df44-5jmz5" Feb 24 10:15:54 crc kubenswrapper[4755]: I0224 10:15:54.105192 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-55c649df44-5jmz5"] Feb 24 10:15:54 crc kubenswrapper[4755]: W0224 10:15:54.110349 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod787b64d5_1156_4a67_b0d8_5750834ad269.slice/crio-c5463699795cd691c06442eb049fcfea5ba877f785de1dcb2116af03afdbcc39 WatchSource:0}: Error finding container c5463699795cd691c06442eb049fcfea5ba877f785de1dcb2116af03afdbcc39: Status 404 returned error can't find the container with id c5463699795cd691c06442eb049fcfea5ba877f785de1dcb2116af03afdbcc39 Feb 24 10:15:54 crc kubenswrapper[4755]: I0224 10:15:54.266897 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-55c649df44-5jmz5" event={"ID":"787b64d5-1156-4a67-b0d8-5750834ad269","Type":"ContainerStarted","Data":"c5463699795cd691c06442eb049fcfea5ba877f785de1dcb2116af03afdbcc39"} Feb 24 10:15:59 crc kubenswrapper[4755]: I0224 10:15:59.301163 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-init-55c649df44-5jmz5" event={"ID":"787b64d5-1156-4a67-b0d8-5750834ad269","Type":"ContainerStarted","Data":"11c79e8c6006ced914b7ca6f947179819b42b8420d7192adc10dce253fc34bf3"} Feb 24 10:15:59 crc kubenswrapper[4755]: I0224 10:15:59.301645 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-55c649df44-5jmz5" Feb 24 10:15:59 crc kubenswrapper[4755]: I0224 10:15:59.351630 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-55c649df44-5jmz5" podStartSLOduration=2.064906917 podStartE2EDuration="6.35159799s" podCreationTimestamp="2026-02-24 10:15:53 +0000 UTC" firstStartedPulling="2026-02-24 10:15:54.112411979 +0000 UTC m=+1258.568934532" lastFinishedPulling="2026-02-24 10:15:58.399103062 +0000 UTC m=+1262.855625605" observedRunningTime="2026-02-24 10:15:59.340684763 +0000 UTC m=+1263.797207336" watchObservedRunningTime="2026-02-24 10:15:59.35159799 +0000 UTC m=+1263.808120573" Feb 24 10:16:03 crc kubenswrapper[4755]: I0224 10:16:03.854777 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-55c649df44-5jmz5" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.497319 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-sgw86"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.498809 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-sgw86" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.500387 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-6n5xx" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.518354 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-sgw86"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.525099 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-8msw2"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.525812 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8msw2" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.530808 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-wsrzl" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.539141 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qhqzn"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.548000 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qhqzn" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.548212 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zlhh\" (UniqueName: \"kubernetes.io/projected/43d8c8a5-3e2b-4900-9ef6-0a154c5ea5db-kube-api-access-5zlhh\") pod \"barbican-operator-controller-manager-868647ff47-sgw86\" (UID: \"43d8c8a5-3e2b-4900-9ef6-0a154c5ea5db\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-sgw86" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.548257 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf4qc\" (UniqueName: \"kubernetes.io/projected/f97b465d-d20f-41b1-812b-429dc053c5b5-kube-api-access-lf4qc\") pod \"designate-operator-controller-manager-6d8bf5c495-8msw2\" (UID: \"f97b465d-d20f-41b1-812b-429dc053c5b5\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8msw2" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.550475 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-hxn45" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.553197 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-8msw2"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.567997 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qhqzn"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.574355 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-wwds6"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.575346 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wwds6" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.578104 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-dcqvw" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.589171 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-k8vmv"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.590270 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-k8vmv" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.598406 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-sxw6c" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.603189 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-thrx4"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.604205 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-thrx4" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.608203 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-wwds6"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.609365 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-hrmsn" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.615960 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-thrx4"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.626135 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-k8vmv"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.643186 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-7wxps"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.644650 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-7wxps" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.647286 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.647470 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-csqm6" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.649827 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fprr\" (UniqueName: \"kubernetes.io/projected/65750c45-1d0e-4367-a870-4e5bd633675a-kube-api-access-8fprr\") pod \"horizon-operator-controller-manager-5b9b8895d5-thrx4\" (UID: \"65750c45-1d0e-4367-a870-4e5bd633675a\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-thrx4" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.649878 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lx2f\" (UniqueName: \"kubernetes.io/projected/8411295e-5ae4-425e-b59d-396fb070aadc-kube-api-access-7lx2f\") pod \"heat-operator-controller-manager-69f49c598c-wwds6\" (UID: \"8411295e-5ae4-425e-b59d-396fb070aadc\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wwds6" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.649925 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zlhh\" (UniqueName: \"kubernetes.io/projected/43d8c8a5-3e2b-4900-9ef6-0a154c5ea5db-kube-api-access-5zlhh\") pod \"barbican-operator-controller-manager-868647ff47-sgw86\" (UID: \"43d8c8a5-3e2b-4900-9ef6-0a154c5ea5db\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-sgw86" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.650094 4755 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf4qc\" (UniqueName: \"kubernetes.io/projected/f97b465d-d20f-41b1-812b-429dc053c5b5-kube-api-access-lf4qc\") pod \"designate-operator-controller-manager-6d8bf5c495-8msw2\" (UID: \"f97b465d-d20f-41b1-812b-429dc053c5b5\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8msw2" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.650146 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f29j\" (UniqueName: \"kubernetes.io/projected/e33ad527-552b-4ab0-88fd-025b07ff29bb-kube-api-access-9f29j\") pod \"glance-operator-controller-manager-784b5bb6c5-k8vmv\" (UID: \"e33ad527-552b-4ab0-88fd-025b07ff29bb\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-k8vmv" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.650173 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gz95\" (UniqueName: \"kubernetes.io/projected/0ad2eb80-fcdd-4b44-a31c-fb7f4e0e8a99-kube-api-access-8gz95\") pod \"cinder-operator-controller-manager-55d77d7b5c-qhqzn\" (UID: \"0ad2eb80-fcdd-4b44-a31c-fb7f4e0e8a99\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qhqzn" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.661338 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-7wxps"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.664938 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-5cksl"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.665738 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5cksl" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.669375 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qszmj" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.678511 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-5cksl"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.695393 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf4qc\" (UniqueName: \"kubernetes.io/projected/f97b465d-d20f-41b1-812b-429dc053c5b5-kube-api-access-lf4qc\") pod \"designate-operator-controller-manager-6d8bf5c495-8msw2\" (UID: \"f97b465d-d20f-41b1-812b-429dc053c5b5\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8msw2" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.711820 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zlhh\" (UniqueName: \"kubernetes.io/projected/43d8c8a5-3e2b-4900-9ef6-0a154c5ea5db-kube-api-access-5zlhh\") pod \"barbican-operator-controller-manager-868647ff47-sgw86\" (UID: \"43d8c8a5-3e2b-4900-9ef6-0a154c5ea5db\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-sgw86" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.716136 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-flf8s"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.717175 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-flf8s" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.719926 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-lmdnj" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.730122 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-flf8s"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.751725 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fprr\" (UniqueName: \"kubernetes.io/projected/65750c45-1d0e-4367-a870-4e5bd633675a-kube-api-access-8fprr\") pod \"horizon-operator-controller-manager-5b9b8895d5-thrx4\" (UID: \"65750c45-1d0e-4367-a870-4e5bd633675a\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-thrx4" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.751763 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lx2f\" (UniqueName: \"kubernetes.io/projected/8411295e-5ae4-425e-b59d-396fb070aadc-kube-api-access-7lx2f\") pod \"heat-operator-controller-manager-69f49c598c-wwds6\" (UID: \"8411295e-5ae4-425e-b59d-396fb070aadc\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wwds6" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.751791 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mchl2\" (UniqueName: \"kubernetes.io/projected/bbde731d-3f4f-4e29-bcb3-8ab17b12775b-kube-api-access-mchl2\") pod \"ironic-operator-controller-manager-554564d7fc-5cksl\" (UID: \"bbde731d-3f4f-4e29-bcb3-8ab17b12775b\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5cksl" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.751824 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7aaa2f50-0589-4189-ad84-e71adb9b006b-cert\") pod \"infra-operator-controller-manager-79d975b745-7wxps\" (UID: \"7aaa2f50-0589-4189-ad84-e71adb9b006b\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-7wxps" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.751856 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f29j\" (UniqueName: \"kubernetes.io/projected/e33ad527-552b-4ab0-88fd-025b07ff29bb-kube-api-access-9f29j\") pod \"glance-operator-controller-manager-784b5bb6c5-k8vmv\" (UID: \"e33ad527-552b-4ab0-88fd-025b07ff29bb\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-k8vmv" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.751874 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gz95\" (UniqueName: \"kubernetes.io/projected/0ad2eb80-fcdd-4b44-a31c-fb7f4e0e8a99-kube-api-access-8gz95\") pod \"cinder-operator-controller-manager-55d77d7b5c-qhqzn\" (UID: \"0ad2eb80-fcdd-4b44-a31c-fb7f4e0e8a99\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qhqzn" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.751913 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zphjz\" (UniqueName: \"kubernetes.io/projected/7aaa2f50-0589-4189-ad84-e71adb9b006b-kube-api-access-zphjz\") pod \"infra-operator-controller-manager-79d975b745-7wxps\" (UID: \"7aaa2f50-0589-4189-ad84-e71adb9b006b\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-7wxps" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.751937 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxfrm\" (UniqueName: 
\"kubernetes.io/projected/d53f75d1-78a2-4293-9517-c686deb7add1-kube-api-access-qxfrm\") pod \"keystone-operator-controller-manager-b4d948c87-flf8s\" (UID: \"d53f75d1-78a2-4293-9517-c686deb7add1\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-flf8s" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.759362 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-6twn5"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.785438 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-6twn5" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.807568 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-cnb8z" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.810187 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-6s452"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.811993 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-6s452" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.814856 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gz95\" (UniqueName: \"kubernetes.io/projected/0ad2eb80-fcdd-4b44-a31c-fb7f4e0e8a99-kube-api-access-8gz95\") pod \"cinder-operator-controller-manager-55d77d7b5c-qhqzn\" (UID: \"0ad2eb80-fcdd-4b44-a31c-fb7f4e0e8a99\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qhqzn" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.815934 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f29j\" (UniqueName: \"kubernetes.io/projected/e33ad527-552b-4ab0-88fd-025b07ff29bb-kube-api-access-9f29j\") pod \"glance-operator-controller-manager-784b5bb6c5-k8vmv\" (UID: \"e33ad527-552b-4ab0-88fd-025b07ff29bb\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-k8vmv" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.828620 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-sgw86" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.859984 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-p225h" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.861225 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lx2f\" (UniqueName: \"kubernetes.io/projected/8411295e-5ae4-425e-b59d-396fb070aadc-kube-api-access-7lx2f\") pod \"heat-operator-controller-manager-69f49c598c-wwds6\" (UID: \"8411295e-5ae4-425e-b59d-396fb070aadc\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wwds6" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.861320 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fprr\" (UniqueName: \"kubernetes.io/projected/65750c45-1d0e-4367-a870-4e5bd633675a-kube-api-access-8fprr\") pod \"horizon-operator-controller-manager-5b9b8895d5-thrx4\" (UID: \"65750c45-1d0e-4367-a870-4e5bd633675a\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-thrx4" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.861776 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqc7m\" (UniqueName: \"kubernetes.io/projected/91f6d092-a52b-4a4e-8d70-4dcbc3970af3-kube-api-access-bqc7m\") pod \"manila-operator-controller-manager-67d996989d-6twn5\" (UID: \"91f6d092-a52b-4a4e-8d70-4dcbc3970af3\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-6twn5" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.861873 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zphjz\" (UniqueName: \"kubernetes.io/projected/7aaa2f50-0589-4189-ad84-e71adb9b006b-kube-api-access-zphjz\") pod 
\"infra-operator-controller-manager-79d975b745-7wxps\" (UID: \"7aaa2f50-0589-4189-ad84-e71adb9b006b\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-7wxps" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.861920 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxfrm\" (UniqueName: \"kubernetes.io/projected/d53f75d1-78a2-4293-9517-c686deb7add1-kube-api-access-qxfrm\") pod \"keystone-operator-controller-manager-b4d948c87-flf8s\" (UID: \"d53f75d1-78a2-4293-9517-c686deb7add1\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-flf8s" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.861953 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69zbl\" (UniqueName: \"kubernetes.io/projected/33906b1e-c4af-403f-bae1-02d79f6e2fe7-kube-api-access-69zbl\") pod \"mariadb-operator-controller-manager-6994f66f48-6s452\" (UID: \"33906b1e-c4af-403f-bae1-02d79f6e2fe7\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-6s452" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.861992 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mchl2\" (UniqueName: \"kubernetes.io/projected/bbde731d-3f4f-4e29-bcb3-8ab17b12775b-kube-api-access-mchl2\") pod \"ironic-operator-controller-manager-554564d7fc-5cksl\" (UID: \"bbde731d-3f4f-4e29-bcb3-8ab17b12775b\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5cksl" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.862024 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7aaa2f50-0589-4189-ad84-e71adb9b006b-cert\") pod \"infra-operator-controller-manager-79d975b745-7wxps\" (UID: \"7aaa2f50-0589-4189-ad84-e71adb9b006b\") " 
pod="openstack-operators/infra-operator-controller-manager-79d975b745-7wxps" Feb 24 10:16:24 crc kubenswrapper[4755]: E0224 10:16:24.862182 4755 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 24 10:16:24 crc kubenswrapper[4755]: E0224 10:16:24.862247 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7aaa2f50-0589-4189-ad84-e71adb9b006b-cert podName:7aaa2f50-0589-4189-ad84-e71adb9b006b nodeName:}" failed. No retries permitted until 2026-02-24 10:16:25.362227242 +0000 UTC m=+1289.818749785 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7aaa2f50-0589-4189-ad84-e71adb9b006b-cert") pod "infra-operator-controller-manager-79d975b745-7wxps" (UID: "7aaa2f50-0589-4189-ad84-e71adb9b006b") : secret "infra-operator-webhook-server-cert" not found Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.862346 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8msw2" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.874274 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-6twn5"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.880571 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qhqzn" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.880869 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-9sgpm"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.881736 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9sgpm" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.883297 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-h6j8c" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.886797 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-6s452"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.890957 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-q5vj5"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.892100 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxfrm\" (UniqueName: \"kubernetes.io/projected/d53f75d1-78a2-4293-9517-c686deb7add1-kube-api-access-qxfrm\") pod \"keystone-operator-controller-manager-b4d948c87-flf8s\" (UID: \"d53f75d1-78a2-4293-9517-c686deb7add1\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-flf8s" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.892478 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mchl2\" (UniqueName: \"kubernetes.io/projected/bbde731d-3f4f-4e29-bcb3-8ab17b12775b-kube-api-access-mchl2\") pod \"ironic-operator-controller-manager-554564d7fc-5cksl\" (UID: \"bbde731d-3f4f-4e29-bcb3-8ab17b12775b\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5cksl" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.893740 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5vj5" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.895464 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-v2gnv" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.904320 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zphjz\" (UniqueName: \"kubernetes.io/projected/7aaa2f50-0589-4189-ad84-e71adb9b006b-kube-api-access-zphjz\") pod \"infra-operator-controller-manager-79d975b745-7wxps\" (UID: \"7aaa2f50-0589-4189-ad84-e71adb9b006b\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-7wxps" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.914253 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wwds6" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.917113 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-9sgpm"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.929762 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-k8vmv" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.935148 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-q5vj5"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.935457 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-thrx4" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.942495 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-5gqdt"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.943437 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-5gqdt" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.945760 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-7k5v5" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.949968 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b926sww"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.950582 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b926sww" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.952103 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-kzs9q" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.952366 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.961139 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-4mx48"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.961968 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-4mx48" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.963123 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbb47\" (UniqueName: \"kubernetes.io/projected/f4b3d83b-2c63-461c-ba0b-0178db6a5cd9-kube-api-access-xbb47\") pod \"octavia-operator-controller-manager-659dc6bbfc-5gqdt\" (UID: \"f4b3d83b-2c63-461c-ba0b-0178db6a5cd9\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-5gqdt" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.963160 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69zbl\" (UniqueName: \"kubernetes.io/projected/33906b1e-c4af-403f-bae1-02d79f6e2fe7-kube-api-access-69zbl\") pod \"mariadb-operator-controller-manager-6994f66f48-6s452\" (UID: \"33906b1e-c4af-403f-bae1-02d79f6e2fe7\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-6s452" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.963181 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2djq\" (UniqueName: \"kubernetes.io/projected/0d3fc12b-b617-4f69-83a2-22b95beab375-kube-api-access-w2djq\") pod \"ovn-operator-controller-manager-5955d8c787-4mx48\" (UID: \"0d3fc12b-b617-4f69-83a2-22b95beab375\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-4mx48" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.963215 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-lvm9x" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.963219 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfpsc\" (UniqueName: 
\"kubernetes.io/projected/0f78d42d-4d7c-4a05-bf20-74701eebe735-kube-api-access-hfpsc\") pod \"nova-operator-controller-manager-567668f5cf-q5vj5\" (UID: \"0f78d42d-4d7c-4a05-bf20-74701eebe735\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5vj5" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.963271 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b926sww\" (UID: \"08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b926sww" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.963301 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swpmk\" (UniqueName: \"kubernetes.io/projected/08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64-kube-api-access-swpmk\") pod \"openstack-baremetal-operator-controller-manager-579b7786b926sww\" (UID: \"08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b926sww" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.963351 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqc7m\" (UniqueName: \"kubernetes.io/projected/91f6d092-a52b-4a4e-8d70-4dcbc3970af3-kube-api-access-bqc7m\") pod \"manila-operator-controller-manager-67d996989d-6twn5\" (UID: \"91f6d092-a52b-4a4e-8d70-4dcbc3970af3\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-6twn5" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.963402 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42mtt\" (UniqueName: \"kubernetes.io/projected/68bfc267-559f-46f5-b1a3-9cdc95813801-kube-api-access-42mtt\") pod 
\"neutron-operator-controller-manager-6bd4687957-9sgpm\" (UID: \"68bfc267-559f-46f5-b1a3-9cdc95813801\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9sgpm" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.965875 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-4mx48"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.980216 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-5gqdt"] Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.983599 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69zbl\" (UniqueName: \"kubernetes.io/projected/33906b1e-c4af-403f-bae1-02d79f6e2fe7-kube-api-access-69zbl\") pod \"mariadb-operator-controller-manager-6994f66f48-6s452\" (UID: \"33906b1e-c4af-403f-bae1-02d79f6e2fe7\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-6s452" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.983948 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqc7m\" (UniqueName: \"kubernetes.io/projected/91f6d092-a52b-4a4e-8d70-4dcbc3970af3-kube-api-access-bqc7m\") pod \"manila-operator-controller-manager-67d996989d-6twn5\" (UID: \"91f6d092-a52b-4a4e-8d70-4dcbc3970af3\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-6twn5" Feb 24 10:16:24 crc kubenswrapper[4755]: I0224 10:16:24.994499 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b926sww"] Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.006807 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-dvfck"] Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.008193 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-dvfck" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.013827 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-rc7b9" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.014475 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-jc7sx"] Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.015319 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jc7sx" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.015811 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5cksl" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.016389 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-7rrq6" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.017250 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-dvfck"] Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.021955 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-jc7sx"] Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.035464 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-wpn7p"] Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.036352 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-wpn7p" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.038029 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-qkmmq" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.058689 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-wpn7p"] Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.060003 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-flf8s" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.064625 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swpmk\" (UniqueName: \"kubernetes.io/projected/08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64-kube-api-access-swpmk\") pod \"openstack-baremetal-operator-controller-manager-579b7786b926sww\" (UID: \"08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b926sww" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.064701 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42mtt\" (UniqueName: \"kubernetes.io/projected/68bfc267-559f-46f5-b1a3-9cdc95813801-kube-api-access-42mtt\") pod \"neutron-operator-controller-manager-6bd4687957-9sgpm\" (UID: \"68bfc267-559f-46f5-b1a3-9cdc95813801\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9sgpm" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.064732 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbb47\" (UniqueName: \"kubernetes.io/projected/f4b3d83b-2c63-461c-ba0b-0178db6a5cd9-kube-api-access-xbb47\") pod 
\"octavia-operator-controller-manager-659dc6bbfc-5gqdt\" (UID: \"f4b3d83b-2c63-461c-ba0b-0178db6a5cd9\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-5gqdt" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.064750 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2djq\" (UniqueName: \"kubernetes.io/projected/0d3fc12b-b617-4f69-83a2-22b95beab375-kube-api-access-w2djq\") pod \"ovn-operator-controller-manager-5955d8c787-4mx48\" (UID: \"0d3fc12b-b617-4f69-83a2-22b95beab375\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-4mx48" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.064800 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfpsc\" (UniqueName: \"kubernetes.io/projected/0f78d42d-4d7c-4a05-bf20-74701eebe735-kube-api-access-hfpsc\") pod \"nova-operator-controller-manager-567668f5cf-q5vj5\" (UID: \"0f78d42d-4d7c-4a05-bf20-74701eebe735\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5vj5" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.064837 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b926sww\" (UID: \"08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b926sww" Feb 24 10:16:25 crc kubenswrapper[4755]: E0224 10:16:25.064942 4755 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 10:16:25 crc kubenswrapper[4755]: E0224 10:16:25.064989 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64-cert 
podName:08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64 nodeName:}" failed. No retries permitted until 2026-02-24 10:16:25.564974496 +0000 UTC m=+1290.021497039 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64-cert") pod "openstack-baremetal-operator-controller-manager-579b7786b926sww" (UID: "08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.101930 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2djq\" (UniqueName: \"kubernetes.io/projected/0d3fc12b-b617-4f69-83a2-22b95beab375-kube-api-access-w2djq\") pod \"ovn-operator-controller-manager-5955d8c787-4mx48\" (UID: \"0d3fc12b-b617-4f69-83a2-22b95beab375\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-4mx48" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.113086 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbb47\" (UniqueName: \"kubernetes.io/projected/f4b3d83b-2c63-461c-ba0b-0178db6a5cd9-kube-api-access-xbb47\") pod \"octavia-operator-controller-manager-659dc6bbfc-5gqdt\" (UID: \"f4b3d83b-2c63-461c-ba0b-0178db6a5cd9\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-5gqdt" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.117054 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfpsc\" (UniqueName: \"kubernetes.io/projected/0f78d42d-4d7c-4a05-bf20-74701eebe735-kube-api-access-hfpsc\") pod \"nova-operator-controller-manager-567668f5cf-q5vj5\" (UID: \"0f78d42d-4d7c-4a05-bf20-74701eebe735\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5vj5" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.117886 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-swpmk\" (UniqueName: \"kubernetes.io/projected/08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64-kube-api-access-swpmk\") pod \"openstack-baremetal-operator-controller-manager-579b7786b926sww\" (UID: \"08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b926sww" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.118372 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42mtt\" (UniqueName: \"kubernetes.io/projected/68bfc267-559f-46f5-b1a3-9cdc95813801-kube-api-access-42mtt\") pod \"neutron-operator-controller-manager-6bd4687957-9sgpm\" (UID: \"68bfc267-559f-46f5-b1a3-9cdc95813801\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9sgpm" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.118626 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-26cq7"] Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.119822 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-26cq7" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.122746 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cccdd" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.167682 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq8wq\" (UniqueName: \"kubernetes.io/projected/0e0b4255-1ae1-4d52-a64f-6872665d8e0a-kube-api-access-tq8wq\") pod \"test-operator-controller-manager-5dc6794d5b-26cq7\" (UID: \"0e0b4255-1ae1-4d52-a64f-6872665d8e0a\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-26cq7" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.167717 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bl94\" (UniqueName: \"kubernetes.io/projected/498af486-ebe4-4542-ab62-11a4286729a8-kube-api-access-7bl94\") pod \"swift-operator-controller-manager-68f46476f-dvfck\" (UID: \"498af486-ebe4-4542-ab62-11a4286729a8\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-dvfck" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.167739 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8cbj\" (UniqueName: \"kubernetes.io/projected/30e7b04e-9eb9-44a5-b062-fc4b470d5b0c-kube-api-access-z8cbj\") pod \"placement-operator-controller-manager-8497b45c89-jc7sx\" (UID: \"30e7b04e-9eb9-44a5-b062-fc4b470d5b0c\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jc7sx" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.167803 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-847mc\" (UniqueName: 
\"kubernetes.io/projected/26bff0b1-5f96-48e7-9975-9e0bb9d24d9b-kube-api-access-847mc\") pod \"telemetry-operator-controller-manager-589c568786-wpn7p\" (UID: \"26bff0b1-5f96-48e7-9975-9e0bb9d24d9b\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-wpn7p" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.169149 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-26cq7"] Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.191729 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-6twn5" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.214164 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-c9nhl"] Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.217433 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-c9nhl" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.218038 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-6s452" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.220867 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-c9nhl"] Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.246256 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-qwf8p" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.257392 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5vj5" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.258127 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c"] Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.258976 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9sgpm" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.260832 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.264694 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.264764 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.264697 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-d8kl9" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.265486 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-5gqdt" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.269902 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-847mc\" (UniqueName: \"kubernetes.io/projected/26bff0b1-5f96-48e7-9975-9e0bb9d24d9b-kube-api-access-847mc\") pod \"telemetry-operator-controller-manager-589c568786-wpn7p\" (UID: \"26bff0b1-5f96-48e7-9975-9e0bb9d24d9b\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-wpn7p" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.270340 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq8wq\" (UniqueName: \"kubernetes.io/projected/0e0b4255-1ae1-4d52-a64f-6872665d8e0a-kube-api-access-tq8wq\") pod \"test-operator-controller-manager-5dc6794d5b-26cq7\" (UID: \"0e0b4255-1ae1-4d52-a64f-6872665d8e0a\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-26cq7" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.270368 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bl94\" (UniqueName: \"kubernetes.io/projected/498af486-ebe4-4542-ab62-11a4286729a8-kube-api-access-7bl94\") pod \"swift-operator-controller-manager-68f46476f-dvfck\" (UID: \"498af486-ebe4-4542-ab62-11a4286729a8\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-dvfck" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.270403 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8cbj\" (UniqueName: \"kubernetes.io/projected/30e7b04e-9eb9-44a5-b062-fc4b470d5b0c-kube-api-access-z8cbj\") pod \"placement-operator-controller-manager-8497b45c89-jc7sx\" (UID: \"30e7b04e-9eb9-44a5-b062-fc4b470d5b0c\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jc7sx" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 
10:16:25.289487 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c"] Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.291868 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-4mx48" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.373735 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq8wq\" (UniqueName: \"kubernetes.io/projected/0e0b4255-1ae1-4d52-a64f-6872665d8e0a-kube-api-access-tq8wq\") pod \"test-operator-controller-manager-5dc6794d5b-26cq7\" (UID: \"0e0b4255-1ae1-4d52-a64f-6872665d8e0a\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-26cq7" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.373776 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-847mc\" (UniqueName: \"kubernetes.io/projected/26bff0b1-5f96-48e7-9975-9e0bb9d24d9b-kube-api-access-847mc\") pod \"telemetry-operator-controller-manager-589c568786-wpn7p\" (UID: \"26bff0b1-5f96-48e7-9975-9e0bb9d24d9b\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-wpn7p" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.374036 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7aaa2f50-0589-4189-ad84-e71adb9b006b-cert\") pod \"infra-operator-controller-manager-79d975b745-7wxps\" (UID: \"7aaa2f50-0589-4189-ad84-e71adb9b006b\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-7wxps" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.374133 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-metrics-certs\") pod 
\"openstack-operator-controller-manager-5dc486cffc-58h2c\" (UID: \"dac16af1-0ec7-4e50-af3b-6e0524257ef2\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.374158 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2jjg\" (UniqueName: \"kubernetes.io/projected/dac16af1-0ec7-4e50-af3b-6e0524257ef2-kube-api-access-v2jjg\") pod \"openstack-operator-controller-manager-5dc486cffc-58h2c\" (UID: \"dac16af1-0ec7-4e50-af3b-6e0524257ef2\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.374179 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-58h2c\" (UID: \"dac16af1-0ec7-4e50-af3b-6e0524257ef2\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.374196 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc4wq\" (UniqueName: \"kubernetes.io/projected/67020e69-36b1-40b1-b5e9-24ec55524f86-kube-api-access-gc4wq\") pod \"watcher-operator-controller-manager-bccc79885-c9nhl\" (UID: \"67020e69-36b1-40b1-b5e9-24ec55524f86\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-c9nhl" Feb 24 10:16:25 crc kubenswrapper[4755]: E0224 10:16:25.374233 4755 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 24 10:16:25 crc kubenswrapper[4755]: E0224 10:16:25.374285 4755 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/7aaa2f50-0589-4189-ad84-e71adb9b006b-cert podName:7aaa2f50-0589-4189-ad84-e71adb9b006b nodeName:}" failed. No retries permitted until 2026-02-24 10:16:26.374265852 +0000 UTC m=+1290.830788395 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7aaa2f50-0589-4189-ad84-e71adb9b006b-cert") pod "infra-operator-controller-manager-79d975b745-7wxps" (UID: "7aaa2f50-0589-4189-ad84-e71adb9b006b") : secret "infra-operator-webhook-server-cert" not found Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.374320 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bl94\" (UniqueName: \"kubernetes.io/projected/498af486-ebe4-4542-ab62-11a4286729a8-kube-api-access-7bl94\") pod \"swift-operator-controller-manager-68f46476f-dvfck\" (UID: \"498af486-ebe4-4542-ab62-11a4286729a8\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-dvfck" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.374621 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8cbj\" (UniqueName: \"kubernetes.io/projected/30e7b04e-9eb9-44a5-b062-fc4b470d5b0c-kube-api-access-z8cbj\") pod \"placement-operator-controller-manager-8497b45c89-jc7sx\" (UID: \"30e7b04e-9eb9-44a5-b062-fc4b470d5b0c\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jc7sx" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.475605 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj5kx"] Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.476533 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj5kx" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.485777 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-26cq7" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.486758 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-58h2c\" (UID: \"dac16af1-0ec7-4e50-af3b-6e0524257ef2\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.486782 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2jjg\" (UniqueName: \"kubernetes.io/projected/dac16af1-0ec7-4e50-af3b-6e0524257ef2-kube-api-access-v2jjg\") pod \"openstack-operator-controller-manager-5dc486cffc-58h2c\" (UID: \"dac16af1-0ec7-4e50-af3b-6e0524257ef2\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.486806 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-58h2c\" (UID: \"dac16af1-0ec7-4e50-af3b-6e0524257ef2\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.486825 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc4wq\" (UniqueName: \"kubernetes.io/projected/67020e69-36b1-40b1-b5e9-24ec55524f86-kube-api-access-gc4wq\") pod \"watcher-operator-controller-manager-bccc79885-c9nhl\" (UID: \"67020e69-36b1-40b1-b5e9-24ec55524f86\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-c9nhl" Feb 24 10:16:25 crc kubenswrapper[4755]: E0224 10:16:25.487495 4755 secret.go:188] 
Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 10:16:25 crc kubenswrapper[4755]: E0224 10:16:25.487536 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-metrics-certs podName:dac16af1-0ec7-4e50-af3b-6e0524257ef2 nodeName:}" failed. No retries permitted until 2026-02-24 10:16:25.987522051 +0000 UTC m=+1290.444044594 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-metrics-certs") pod "openstack-operator-controller-manager-5dc486cffc-58h2c" (UID: "dac16af1-0ec7-4e50-af3b-6e0524257ef2") : secret "metrics-server-cert" not found Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.487689 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-nzt2z" Feb 24 10:16:25 crc kubenswrapper[4755]: E0224 10:16:25.487782 4755 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 10:16:25 crc kubenswrapper[4755]: E0224 10:16:25.487807 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-webhook-certs podName:dac16af1-0ec7-4e50-af3b-6e0524257ef2 nodeName:}" failed. No retries permitted until 2026-02-24 10:16:25.987800119 +0000 UTC m=+1290.444322662 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-webhook-certs") pod "openstack-operator-controller-manager-5dc486cffc-58h2c" (UID: "dac16af1-0ec7-4e50-af3b-6e0524257ef2") : secret "webhook-server-cert" not found Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.491051 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj5kx"] Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.516123 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc4wq\" (UniqueName: \"kubernetes.io/projected/67020e69-36b1-40b1-b5e9-24ec55524f86-kube-api-access-gc4wq\") pod \"watcher-operator-controller-manager-bccc79885-c9nhl\" (UID: \"67020e69-36b1-40b1-b5e9-24ec55524f86\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-c9nhl" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.516759 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2jjg\" (UniqueName: \"kubernetes.io/projected/dac16af1-0ec7-4e50-af3b-6e0524257ef2-kube-api-access-v2jjg\") pod \"openstack-operator-controller-manager-5dc486cffc-58h2c\" (UID: \"dac16af1-0ec7-4e50-af3b-6e0524257ef2\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.536854 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-sgw86"] Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.557650 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-c9nhl" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.589822 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv728\" (UniqueName: \"kubernetes.io/projected/eef5bad6-267a-4408-8d8e-c01b11a4b0f0-kube-api-access-qv728\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nj5kx\" (UID: \"eef5bad6-267a-4408-8d8e-c01b11a4b0f0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj5kx" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.589879 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b926sww\" (UID: \"08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b926sww" Feb 24 10:16:25 crc kubenswrapper[4755]: E0224 10:16:25.590044 4755 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 10:16:25 crc kubenswrapper[4755]: E0224 10:16:25.590104 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64-cert podName:08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64 nodeName:}" failed. No retries permitted until 2026-02-24 10:16:26.59009083 +0000 UTC m=+1291.046613373 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64-cert") pod "openstack-baremetal-operator-controller-manager-579b7786b926sww" (UID: "08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.603528 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-8msw2"] Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.637417 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-dvfck" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.649802 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jc7sx" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.662155 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-wpn7p" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.690812 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv728\" (UniqueName: \"kubernetes.io/projected/eef5bad6-267a-4408-8d8e-c01b11a4b0f0-kube-api-access-qv728\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nj5kx\" (UID: \"eef5bad6-267a-4408-8d8e-c01b11a4b0f0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj5kx" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.712312 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv728\" (UniqueName: \"kubernetes.io/projected/eef5bad6-267a-4408-8d8e-c01b11a4b0f0-kube-api-access-qv728\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nj5kx\" (UID: \"eef5bad6-267a-4408-8d8e-c01b11a4b0f0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj5kx" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.929350 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj5kx" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.997753 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-58h2c\" (UID: \"dac16af1-0ec7-4e50-af3b-6e0524257ef2\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:16:25 crc kubenswrapper[4755]: I0224 10:16:25.997801 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-58h2c\" (UID: \"dac16af1-0ec7-4e50-af3b-6e0524257ef2\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:16:25 crc kubenswrapper[4755]: E0224 10:16:25.997913 4755 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 10:16:25 crc kubenswrapper[4755]: E0224 10:16:25.997977 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-webhook-certs podName:dac16af1-0ec7-4e50-af3b-6e0524257ef2 nodeName:}" failed. No retries permitted until 2026-02-24 10:16:26.997961682 +0000 UTC m=+1291.454484225 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-webhook-certs") pod "openstack-operator-controller-manager-5dc486cffc-58h2c" (UID: "dac16af1-0ec7-4e50-af3b-6e0524257ef2") : secret "webhook-server-cert" not found Feb 24 10:16:25 crc kubenswrapper[4755]: E0224 10:16:25.998350 4755 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 10:16:25 crc kubenswrapper[4755]: E0224 10:16:25.998382 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-metrics-certs podName:dac16af1-0ec7-4e50-af3b-6e0524257ef2 nodeName:}" failed. No retries permitted until 2026-02-24 10:16:26.998373854 +0000 UTC m=+1291.454896397 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-metrics-certs") pod "openstack-operator-controller-manager-5dc486cffc-58h2c" (UID: "dac16af1-0ec7-4e50-af3b-6e0524257ef2") : secret "metrics-server-cert" not found Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.111033 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qhqzn"] Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.114747 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-wwds6"] Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.156736 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-k8vmv"] Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.188844 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-flf8s"] Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.195619 
4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-5cksl"] Feb 24 10:16:26 crc kubenswrapper[4755]: W0224 10:16:26.199390 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd53f75d1_78a2_4293_9517_c686deb7add1.slice/crio-16bc9a62bcbeb9dd559e2fd51886499cf7bd920a844ce308ac9fc935a2c7f4ed WatchSource:0}: Error finding container 16bc9a62bcbeb9dd559e2fd51886499cf7bd920a844ce308ac9fc935a2c7f4ed: Status 404 returned error can't find the container with id 16bc9a62bcbeb9dd559e2fd51886499cf7bd920a844ce308ac9fc935a2c7f4ed Feb 24 10:16:26 crc kubenswrapper[4755]: W0224 10:16:26.200658 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbde731d_3f4f_4e29_bcb3_8ab17b12775b.slice/crio-de0d8d75d560251c5b4e79feacad1bd0f580e1fc0360581c36f12710a3b9b3ba WatchSource:0}: Error finding container de0d8d75d560251c5b4e79feacad1bd0f580e1fc0360581c36f12710a3b9b3ba: Status 404 returned error can't find the container with id de0d8d75d560251c5b4e79feacad1bd0f580e1fc0360581c36f12710a3b9b3ba Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.304849 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-wpn7p"] Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.311608 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-jc7sx"] Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.333140 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-thrx4"] Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.333179 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-5gqdt"] Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.333206 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-6twn5"] Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.334625 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-6s452"] Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.404229 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7aaa2f50-0589-4189-ad84-e71adb9b006b-cert\") pod \"infra-operator-controller-manager-79d975b745-7wxps\" (UID: \"7aaa2f50-0589-4189-ad84-e71adb9b006b\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-7wxps" Feb 24 10:16:26 crc kubenswrapper[4755]: E0224 10:16:26.404374 4755 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 24 10:16:26 crc kubenswrapper[4755]: E0224 10:16:26.404433 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7aaa2f50-0589-4189-ad84-e71adb9b006b-cert podName:7aaa2f50-0589-4189-ad84-e71adb9b006b nodeName:}" failed. No retries permitted until 2026-02-24 10:16:28.40441299 +0000 UTC m=+1292.860935523 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7aaa2f50-0589-4189-ad84-e71adb9b006b-cert") pod "infra-operator-controller-manager-79d975b745-7wxps" (UID: "7aaa2f50-0589-4189-ad84-e71adb9b006b") : secret "infra-operator-webhook-server-cert" not found Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.504014 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-q5vj5"] Feb 24 10:16:26 crc kubenswrapper[4755]: W0224 10:16:26.512921 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f78d42d_4d7c_4a05_bf20_74701eebe735.slice/crio-e1bc4cee4e24e63388fe7ab76db0ea19e5b02eea00dba98baae0074e33ad725e WatchSource:0}: Error finding container e1bc4cee4e24e63388fe7ab76db0ea19e5b02eea00dba98baae0074e33ad725e: Status 404 returned error can't find the container with id e1bc4cee4e24e63388fe7ab76db0ea19e5b02eea00dba98baae0074e33ad725e Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.514008 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-4mx48"] Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.525498 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-26cq7"] Feb 24 10:16:26 crc kubenswrapper[4755]: E0224 10:16:26.543699 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tq8wq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5dc6794d5b-26cq7_openstack-operators(0e0b4255-1ae1-4d52-a64f-6872665d8e0a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 24 10:16:26 crc kubenswrapper[4755]: E0224 10:16:26.544973 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-26cq7" podUID="0e0b4255-1ae1-4d52-a64f-6872665d8e0a" Feb 24 10:16:26 crc kubenswrapper[4755]: E0224 10:16:26.547634 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hfpsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-q5vj5_openstack-operators(0f78d42d-4d7c-4a05-bf20-74701eebe735): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 24 10:16:26 crc kubenswrapper[4755]: E0224 10:16:26.548745 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5vj5" podUID="0f78d42d-4d7c-4a05-bf20-74701eebe735" Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.552162 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-c9nhl"] Feb 24 10:16:26 crc kubenswrapper[4755]: E0224 10:16:26.556585 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7bl94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-dvfck_openstack-operators(498af486-ebe4-4542-ab62-11a4286729a8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 24 10:16:26 crc kubenswrapper[4755]: E0224 10:16:26.561459 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-dvfck" podUID="498af486-ebe4-4542-ab62-11a4286729a8" Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.566370 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-dvfck"] Feb 24 10:16:26 crc kubenswrapper[4755]: E0224 10:16:26.566768 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-42mtt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-6bd4687957-9sgpm_openstack-operators(68bfc267-559f-46f5-b1a3-9cdc95813801): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 24 10:16:26 crc kubenswrapper[4755]: W0224 10:16:26.566829 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67020e69_36b1_40b1_b5e9_24ec55524f86.slice/crio-2fe1b74cc76de8af28199465ac1dd49bb6ec1ebaa142b2a93dc7df5486d2cac3 WatchSource:0}: Error finding container 2fe1b74cc76de8af28199465ac1dd49bb6ec1ebaa142b2a93dc7df5486d2cac3: Status 404 returned error can't find the container with id 2fe1b74cc76de8af28199465ac1dd49bb6ec1ebaa142b2a93dc7df5486d2cac3 Feb 24 10:16:26 crc kubenswrapper[4755]: E0224 10:16:26.568259 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9sgpm" podUID="68bfc267-559f-46f5-b1a3-9cdc95813801" Feb 24 10:16:26 crc kubenswrapper[4755]: E0224 10:16:26.570732 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gc4wq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-bccc79885-c9nhl_openstack-operators(67020e69-36b1-40b1-b5e9-24ec55524f86): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 24 10:16:26 crc kubenswrapper[4755]: E0224 10:16:26.572116 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-c9nhl" podUID="67020e69-36b1-40b1-b5e9-24ec55524f86" Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.572746 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-5gqdt" event={"ID":"f4b3d83b-2c63-461c-ba0b-0178db6a5cd9","Type":"ContainerStarted","Data":"59da921f1729c8b2ff352010a9179997c631595fe1f5ef9c945504ddddf31299"} Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.574299 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-4mx48" 
event={"ID":"0d3fc12b-b617-4f69-83a2-22b95beab375","Type":"ContainerStarted","Data":"2a73685524bad4e52e5c2bade328bb5da3f7050036671df30e4d10ab0474241d"} Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.574652 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-9sgpm"] Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.581078 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wwds6" event={"ID":"8411295e-5ae4-425e-b59d-396fb070aadc","Type":"ContainerStarted","Data":"bec61ab2430412de6523c1947dd9e1ed5eb73b011f40132155d03fb40a819a0b"} Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.584024 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-thrx4" event={"ID":"65750c45-1d0e-4367-a870-4e5bd633675a","Type":"ContainerStarted","Data":"8d2a20d6ddae582f6745b476ebc3b1ca64923474e0cd2f6ca0b6de2b8592a526"} Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.589754 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-6twn5" event={"ID":"91f6d092-a52b-4a4e-8d70-4dcbc3970af3","Type":"ContainerStarted","Data":"9bf3d0899baa1733c77e393021d980e06363b0da82f91631a8577beaafb9c08a"} Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.592798 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5vj5" event={"ID":"0f78d42d-4d7c-4a05-bf20-74701eebe735","Type":"ContainerStarted","Data":"e1bc4cee4e24e63388fe7ab76db0ea19e5b02eea00dba98baae0074e33ad725e"} Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.594302 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5cksl" 
event={"ID":"bbde731d-3f4f-4e29-bcb3-8ab17b12775b","Type":"ContainerStarted","Data":"de0d8d75d560251c5b4e79feacad1bd0f580e1fc0360581c36f12710a3b9b3ba"} Feb 24 10:16:26 crc kubenswrapper[4755]: E0224 10:16:26.595153 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5vj5" podUID="0f78d42d-4d7c-4a05-bf20-74701eebe735" Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.595952 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-flf8s" event={"ID":"d53f75d1-78a2-4293-9517-c686deb7add1","Type":"ContainerStarted","Data":"16bc9a62bcbeb9dd559e2fd51886499cf7bd920a844ce308ac9fc935a2c7f4ed"} Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.598257 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jc7sx" event={"ID":"30e7b04e-9eb9-44a5-b062-fc4b470d5b0c","Type":"ContainerStarted","Data":"5b56455b8dd1ee9eff512ee78ab5ee3a2783da1892d402245b76b99f7a041291"} Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.602272 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8msw2" event={"ID":"f97b465d-d20f-41b1-812b-429dc053c5b5","Type":"ContainerStarted","Data":"5ede75f9a04b6dc4e52293a6d0967f69811f5bdac4c9421e0eb5a0aa05bf7656"} Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.608938 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b926sww\" (UID: 
\"08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b926sww" Feb 24 10:16:26 crc kubenswrapper[4755]: E0224 10:16:26.609059 4755 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 10:16:26 crc kubenswrapper[4755]: E0224 10:16:26.609117 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64-cert podName:08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64 nodeName:}" failed. No retries permitted until 2026-02-24 10:16:28.609102454 +0000 UTC m=+1293.065624987 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64-cert") pod "openstack-baremetal-operator-controller-manager-579b7786b926sww" (UID: "08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.613346 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qhqzn" event={"ID":"0ad2eb80-fcdd-4b44-a31c-fb7f4e0e8a99","Type":"ContainerStarted","Data":"f28ea47784cf23c1d8ff07a9adc326b994de662645c7e3fa1e5286da507fa9aa"} Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.614318 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-wpn7p" event={"ID":"26bff0b1-5f96-48e7-9975-9e0bb9d24d9b","Type":"ContainerStarted","Data":"cc3fbff8168d2fe25e0a07656c5d2fbe9facbc8ef8bc17d1f5065c8291a3cb69"} Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.615038 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-26cq7" 
event={"ID":"0e0b4255-1ae1-4d52-a64f-6872665d8e0a","Type":"ContainerStarted","Data":"d7f7f9a6f65ef2184fb7825e46f23be984649201bac977aab1d37ac16ffd77d9"} Feb 24 10:16:26 crc kubenswrapper[4755]: E0224 10:16:26.617244 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98\\\"\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-26cq7" podUID="0e0b4255-1ae1-4d52-a64f-6872665d8e0a" Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.627601 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-sgw86" event={"ID":"43d8c8a5-3e2b-4900-9ef6-0a154c5ea5db","Type":"ContainerStarted","Data":"2a7d7676b8cb27ddf7a4784ec3f0e0d6ced5179483ce41f5830de4fa4017cd9f"} Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.630112 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-6s452" event={"ID":"33906b1e-c4af-403f-bae1-02d79f6e2fe7","Type":"ContainerStarted","Data":"33392523f657ca3b0e1055c2f6fd367a341286ff7f10358d4492033f0db579f5"} Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.632197 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-k8vmv" event={"ID":"e33ad527-552b-4ab0-88fd-025b07ff29bb","Type":"ContainerStarted","Data":"0685270aa9fe03d8fc0371b0094c9903c06c54f87eb40c6f8148defd755eef54"} Feb 24 10:16:26 crc kubenswrapper[4755]: I0224 10:16:26.702145 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj5kx"] Feb 24 10:16:26 crc kubenswrapper[4755]: W0224 10:16:26.702551 4755 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeef5bad6_267a_4408_8d8e_c01b11a4b0f0.slice/crio-6a377e24b55d2473960f83265fbe2c29e10b0734112c1c822a238758acb46e1b WatchSource:0}: Error finding container 6a377e24b55d2473960f83265fbe2c29e10b0734112c1c822a238758acb46e1b: Status 404 returned error can't find the container with id 6a377e24b55d2473960f83265fbe2c29e10b0734112c1c822a238758acb46e1b Feb 24 10:16:26 crc kubenswrapper[4755]: E0224 10:16:26.704962 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qv728,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-nj5kx_openstack-operators(eef5bad6-267a-4408-8d8e-c01b11a4b0f0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 24 10:16:26 crc kubenswrapper[4755]: E0224 10:16:26.706228 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj5kx" podUID="eef5bad6-267a-4408-8d8e-c01b11a4b0f0" Feb 24 10:16:27 crc kubenswrapper[4755]: I0224 10:16:27.014394 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-58h2c\" (UID: \"dac16af1-0ec7-4e50-af3b-6e0524257ef2\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:16:27 crc kubenswrapper[4755]: I0224 10:16:27.014458 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-58h2c\" (UID: \"dac16af1-0ec7-4e50-af3b-6e0524257ef2\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:16:27 crc kubenswrapper[4755]: E0224 10:16:27.014602 4755 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 10:16:27 crc kubenswrapper[4755]: E0224 10:16:27.014658 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-webhook-certs podName:dac16af1-0ec7-4e50-af3b-6e0524257ef2 nodeName:}" failed. No retries permitted until 2026-02-24 10:16:29.014639943 +0000 UTC m=+1293.471162486 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-webhook-certs") pod "openstack-operator-controller-manager-5dc486cffc-58h2c" (UID: "dac16af1-0ec7-4e50-af3b-6e0524257ef2") : secret "webhook-server-cert" not found Feb 24 10:16:27 crc kubenswrapper[4755]: E0224 10:16:27.014744 4755 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 10:16:27 crc kubenswrapper[4755]: E0224 10:16:27.014806 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-metrics-certs podName:dac16af1-0ec7-4e50-af3b-6e0524257ef2 nodeName:}" failed. No retries permitted until 2026-02-24 10:16:29.014789528 +0000 UTC m=+1293.471312071 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-metrics-certs") pod "openstack-operator-controller-manager-5dc486cffc-58h2c" (UID: "dac16af1-0ec7-4e50-af3b-6e0524257ef2") : secret "metrics-server-cert" not found Feb 24 10:16:27 crc kubenswrapper[4755]: I0224 10:16:27.644835 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj5kx" event={"ID":"eef5bad6-267a-4408-8d8e-c01b11a4b0f0","Type":"ContainerStarted","Data":"6a377e24b55d2473960f83265fbe2c29e10b0734112c1c822a238758acb46e1b"} Feb 24 10:16:27 crc kubenswrapper[4755]: E0224 10:16:27.646871 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj5kx" podUID="eef5bad6-267a-4408-8d8e-c01b11a4b0f0" Feb 24 10:16:27 crc kubenswrapper[4755]: I0224 10:16:27.649128 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-dvfck" event={"ID":"498af486-ebe4-4542-ab62-11a4286729a8","Type":"ContainerStarted","Data":"6b5e18c54244120d3110a9fa76a6a25a7307a2c2a3d2b297239ad4fcab49ca76"} Feb 24 10:16:27 crc kubenswrapper[4755]: E0224 10:16:27.650251 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-dvfck" podUID="498af486-ebe4-4542-ab62-11a4286729a8" Feb 24 10:16:27 crc kubenswrapper[4755]: I0224 
10:16:27.663640 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-c9nhl" event={"ID":"67020e69-36b1-40b1-b5e9-24ec55524f86","Type":"ContainerStarted","Data":"2fe1b74cc76de8af28199465ac1dd49bb6ec1ebaa142b2a93dc7df5486d2cac3"} Feb 24 10:16:27 crc kubenswrapper[4755]: E0224 10:16:27.678575 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-c9nhl" podUID="67020e69-36b1-40b1-b5e9-24ec55524f86" Feb 24 10:16:27 crc kubenswrapper[4755]: I0224 10:16:27.685126 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9sgpm" event={"ID":"68bfc267-559f-46f5-b1a3-9cdc95813801","Type":"ContainerStarted","Data":"a7f47193c24714a2d90a9fba495cc9ece11e87f30f7fc01a0b6ea67167a63dd1"} Feb 24 10:16:27 crc kubenswrapper[4755]: E0224 10:16:27.686788 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98\\\"\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-26cq7" podUID="0e0b4255-1ae1-4d52-a64f-6872665d8e0a" Feb 24 10:16:27 crc kubenswrapper[4755]: E0224 10:16:27.686852 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5vj5" podUID="0f78d42d-4d7c-4a05-bf20-74701eebe735" Feb 24 10:16:27 crc kubenswrapper[4755]: E0224 10:16:27.686976 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9sgpm" podUID="68bfc267-559f-46f5-b1a3-9cdc95813801" Feb 24 10:16:28 crc kubenswrapper[4755]: I0224 10:16:28.457339 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7aaa2f50-0589-4189-ad84-e71adb9b006b-cert\") pod \"infra-operator-controller-manager-79d975b745-7wxps\" (UID: \"7aaa2f50-0589-4189-ad84-e71adb9b006b\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-7wxps" Feb 24 10:16:28 crc kubenswrapper[4755]: E0224 10:16:28.457557 4755 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 24 10:16:28 crc kubenswrapper[4755]: E0224 10:16:28.457609 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7aaa2f50-0589-4189-ad84-e71adb9b006b-cert podName:7aaa2f50-0589-4189-ad84-e71adb9b006b nodeName:}" failed. No retries permitted until 2026-02-24 10:16:32.457593884 +0000 UTC m=+1296.914116427 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7aaa2f50-0589-4189-ad84-e71adb9b006b-cert") pod "infra-operator-controller-manager-79d975b745-7wxps" (UID: "7aaa2f50-0589-4189-ad84-e71adb9b006b") : secret "infra-operator-webhook-server-cert" not found Feb 24 10:16:28 crc kubenswrapper[4755]: I0224 10:16:28.666349 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b926sww\" (UID: \"08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b926sww" Feb 24 10:16:28 crc kubenswrapper[4755]: E0224 10:16:28.666718 4755 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 10:16:28 crc kubenswrapper[4755]: E0224 10:16:28.666799 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64-cert podName:08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64 nodeName:}" failed. No retries permitted until 2026-02-24 10:16:32.666776158 +0000 UTC m=+1297.123298701 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64-cert") pod "openstack-baremetal-operator-controller-manager-579b7786b926sww" (UID: "08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 10:16:28 crc kubenswrapper[4755]: E0224 10:16:28.690427 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9sgpm" podUID="68bfc267-559f-46f5-b1a3-9cdc95813801" Feb 24 10:16:28 crc kubenswrapper[4755]: E0224 10:16:28.690755 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj5kx" podUID="eef5bad6-267a-4408-8d8e-c01b11a4b0f0" Feb 24 10:16:28 crc kubenswrapper[4755]: E0224 10:16:28.691462 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-dvfck" podUID="498af486-ebe4-4542-ab62-11a4286729a8" Feb 24 10:16:28 crc kubenswrapper[4755]: E0224 10:16:28.691897 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-c9nhl" podUID="67020e69-36b1-40b1-b5e9-24ec55524f86" Feb 24 10:16:29 crc kubenswrapper[4755]: I0224 10:16:29.072158 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-58h2c\" (UID: \"dac16af1-0ec7-4e50-af3b-6e0524257ef2\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:16:29 crc kubenswrapper[4755]: I0224 10:16:29.072207 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-58h2c\" (UID: \"dac16af1-0ec7-4e50-af3b-6e0524257ef2\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:16:29 crc kubenswrapper[4755]: E0224 10:16:29.072378 4755 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 10:16:29 crc kubenswrapper[4755]: E0224 10:16:29.072422 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-webhook-certs podName:dac16af1-0ec7-4e50-af3b-6e0524257ef2 nodeName:}" failed. No retries permitted until 2026-02-24 10:16:33.07241002 +0000 UTC m=+1297.528932563 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-webhook-certs") pod "openstack-operator-controller-manager-5dc486cffc-58h2c" (UID: "dac16af1-0ec7-4e50-af3b-6e0524257ef2") : secret "webhook-server-cert" not found Feb 24 10:16:29 crc kubenswrapper[4755]: E0224 10:16:29.072781 4755 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 10:16:29 crc kubenswrapper[4755]: E0224 10:16:29.072868 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-metrics-certs podName:dac16af1-0ec7-4e50-af3b-6e0524257ef2 nodeName:}" failed. No retries permitted until 2026-02-24 10:16:33.072850274 +0000 UTC m=+1297.529372817 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-metrics-certs") pod "openstack-operator-controller-manager-5dc486cffc-58h2c" (UID: "dac16af1-0ec7-4e50-af3b-6e0524257ef2") : secret "metrics-server-cert" not found Feb 24 10:16:32 crc kubenswrapper[4755]: I0224 10:16:32.537109 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7aaa2f50-0589-4189-ad84-e71adb9b006b-cert\") pod \"infra-operator-controller-manager-79d975b745-7wxps\" (UID: \"7aaa2f50-0589-4189-ad84-e71adb9b006b\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-7wxps" Feb 24 10:16:32 crc kubenswrapper[4755]: E0224 10:16:32.537368 4755 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 24 10:16:32 crc kubenswrapper[4755]: E0224 10:16:32.537616 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7aaa2f50-0589-4189-ad84-e71adb9b006b-cert 
podName:7aaa2f50-0589-4189-ad84-e71adb9b006b nodeName:}" failed. No retries permitted until 2026-02-24 10:16:40.537599801 +0000 UTC m=+1304.994122344 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7aaa2f50-0589-4189-ad84-e71adb9b006b-cert") pod "infra-operator-controller-manager-79d975b745-7wxps" (UID: "7aaa2f50-0589-4189-ad84-e71adb9b006b") : secret "infra-operator-webhook-server-cert" not found Feb 24 10:16:32 crc kubenswrapper[4755]: I0224 10:16:32.740605 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b926sww\" (UID: \"08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b926sww" Feb 24 10:16:32 crc kubenswrapper[4755]: E0224 10:16:32.740991 4755 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 10:16:32 crc kubenswrapper[4755]: E0224 10:16:32.741483 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64-cert podName:08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64 nodeName:}" failed. No retries permitted until 2026-02-24 10:16:40.741462529 +0000 UTC m=+1305.197985082 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64-cert") pod "openstack-baremetal-operator-controller-manager-579b7786b926sww" (UID: "08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 10:16:33 crc kubenswrapper[4755]: I0224 10:16:33.146056 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-58h2c\" (UID: \"dac16af1-0ec7-4e50-af3b-6e0524257ef2\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:16:33 crc kubenswrapper[4755]: I0224 10:16:33.146348 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-58h2c\" (UID: \"dac16af1-0ec7-4e50-af3b-6e0524257ef2\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:16:33 crc kubenswrapper[4755]: E0224 10:16:33.146234 4755 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 10:16:33 crc kubenswrapper[4755]: E0224 10:16:33.146441 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-metrics-certs podName:dac16af1-0ec7-4e50-af3b-6e0524257ef2 nodeName:}" failed. No retries permitted until 2026-02-24 10:16:41.146424881 +0000 UTC m=+1305.602947424 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-metrics-certs") pod "openstack-operator-controller-manager-5dc486cffc-58h2c" (UID: "dac16af1-0ec7-4e50-af3b-6e0524257ef2") : secret "metrics-server-cert" not found Feb 24 10:16:33 crc kubenswrapper[4755]: E0224 10:16:33.146504 4755 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 10:16:33 crc kubenswrapper[4755]: E0224 10:16:33.146611 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-webhook-certs podName:dac16af1-0ec7-4e50-af3b-6e0524257ef2 nodeName:}" failed. No retries permitted until 2026-02-24 10:16:41.146588236 +0000 UTC m=+1305.603110839 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-webhook-certs") pod "openstack-operator-controller-manager-5dc486cffc-58h2c" (UID: "dac16af1-0ec7-4e50-af3b-6e0524257ef2") : secret "webhook-server-cert" not found Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.775075 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8msw2" event={"ID":"f97b465d-d20f-41b1-812b-429dc053c5b5","Type":"ContainerStarted","Data":"3779b8022358e12495ad628c369820444876980b3625d12e0d01db484af23914"} Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.776416 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8msw2" Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.781230 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-wpn7p" 
event={"ID":"26bff0b1-5f96-48e7-9975-9e0bb9d24d9b","Type":"ContainerStarted","Data":"0d3978ef020290e97ced14f712c4fe366e15651cd10886a4610b2ee0271f271b"} Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.781517 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-wpn7p" Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.782786 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jc7sx" event={"ID":"30e7b04e-9eb9-44a5-b062-fc4b470d5b0c","Type":"ContainerStarted","Data":"36971ca018570cd2376308854a122065eba5842a7bfa7e57a7e565cba91c1233"} Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.783370 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jc7sx" Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.788011 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-k8vmv" event={"ID":"e33ad527-552b-4ab0-88fd-025b07ff29bb","Type":"ContainerStarted","Data":"f814a09129f1169749f8e6b3a124a088eb4deb0ed2d769222b30a9a5b3337810"} Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.788155 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-k8vmv" Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.797490 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-5gqdt" event={"ID":"f4b3d83b-2c63-461c-ba0b-0178db6a5cd9","Type":"ContainerStarted","Data":"23b4fad1480bfdb8de2048bafc76598268bdcff048964a55b338feb95cc35a0d"} Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.797642 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-5gqdt" Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.798922 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wwds6" event={"ID":"8411295e-5ae4-425e-b59d-396fb070aadc","Type":"ContainerStarted","Data":"b7e6d3a068b14b82ab20b5ee5fe9b777297040f9be65ecb99be233efb4a0572b"} Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.799273 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wwds6" Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.800315 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-6twn5" event={"ID":"91f6d092-a52b-4a4e-8d70-4dcbc3970af3","Type":"ContainerStarted","Data":"55123a9c9ac84c37daa3b42f53f6f753f8a4ac96c14c3fb750bd6ff6c8910856"} Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.800640 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-6twn5" Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.801655 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5cksl" event={"ID":"bbde731d-3f4f-4e29-bcb3-8ab17b12775b","Type":"ContainerStarted","Data":"f98cae0133ba0097280b890d8f25c80dc428d36b15cb5a0077733289552ac596"} Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.802000 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5cksl" Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.802997 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-4mx48" 
event={"ID":"0d3fc12b-b617-4f69-83a2-22b95beab375","Type":"ContainerStarted","Data":"02452075ea6818fe4728dbe256b33e3b4202081db24089250bc851c2945c3dc1"} Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.803321 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-4mx48" Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.804241 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-thrx4" event={"ID":"65750c45-1d0e-4367-a870-4e5bd633675a","Type":"ContainerStarted","Data":"fb5b86f8a582dfdf0be6cbd6ad75a160c3bea10393306be3eba9ecc609dad167"} Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.804552 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-thrx4" Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.805466 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-sgw86" event={"ID":"43d8c8a5-3e2b-4900-9ef6-0a154c5ea5db","Type":"ContainerStarted","Data":"a9d670562572b1899b08e22a32687bffd644ffc205f152280e34df0086217267"} Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.805829 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-sgw86" Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.806838 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qhqzn" event={"ID":"0ad2eb80-fcdd-4b44-a31c-fb7f4e0e8a99","Type":"ContainerStarted","Data":"bf7d85fe764a20b7217f2b7da6fecf69baac299cabfa6e6dea4a6aad393fb6e0"} Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.807176 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qhqzn" Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.808057 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-flf8s" event={"ID":"d53f75d1-78a2-4293-9517-c686deb7add1","Type":"ContainerStarted","Data":"a86a1e0895db64cdd9a493b17c4ef4a82dc4db67b941095c33a3b364e569da66"} Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.808378 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-flf8s" Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.809285 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-6s452" event={"ID":"33906b1e-c4af-403f-bae1-02d79f6e2fe7","Type":"ContainerStarted","Data":"c2a7a0db8f6fde8e6d6ff7f6c8121a6b3a08a75ce5355534cda7a7297004b429"} Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.809594 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-6s452" Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.915916 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8msw2" podStartSLOduration=2.494462051 podStartE2EDuration="14.915900216s" podCreationTimestamp="2026-02-24 10:16:24 +0000 UTC" firstStartedPulling="2026-02-24 10:16:25.627212026 +0000 UTC m=+1290.083734569" lastFinishedPulling="2026-02-24 10:16:38.048650191 +0000 UTC m=+1302.505172734" observedRunningTime="2026-02-24 10:16:38.843777207 +0000 UTC m=+1303.300299750" watchObservedRunningTime="2026-02-24 10:16:38.915900216 +0000 UTC m=+1303.372422759" Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.919054 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5cksl" podStartSLOduration=3.0724665780000002 podStartE2EDuration="14.919045542s" podCreationTimestamp="2026-02-24 10:16:24 +0000 UTC" firstStartedPulling="2026-02-24 10:16:26.204246715 +0000 UTC m=+1290.660769258" lastFinishedPulling="2026-02-24 10:16:38.050825679 +0000 UTC m=+1302.507348222" observedRunningTime="2026-02-24 10:16:38.916613427 +0000 UTC m=+1303.373135970" watchObservedRunningTime="2026-02-24 10:16:38.919045542 +0000 UTC m=+1303.375568085" Feb 24 10:16:38 crc kubenswrapper[4755]: I0224 10:16:38.980085 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-wpn7p" podStartSLOduration=3.249592382 podStartE2EDuration="14.980052938s" podCreationTimestamp="2026-02-24 10:16:24 +0000 UTC" firstStartedPulling="2026-02-24 10:16:26.318503005 +0000 UTC m=+1290.775025548" lastFinishedPulling="2026-02-24 10:16:38.048963541 +0000 UTC m=+1302.505486104" observedRunningTime="2026-02-24 10:16:38.975462596 +0000 UTC m=+1303.431985139" watchObservedRunningTime="2026-02-24 10:16:38.980052938 +0000 UTC m=+1303.436575471" Feb 24 10:16:39 crc kubenswrapper[4755]: I0224 10:16:39.074017 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-k8vmv" podStartSLOduration=3.189468104 podStartE2EDuration="15.07399997s" podCreationTimestamp="2026-02-24 10:16:24 +0000 UTC" firstStartedPulling="2026-02-24 10:16:26.164770705 +0000 UTC m=+1290.621293248" lastFinishedPulling="2026-02-24 10:16:38.049302531 +0000 UTC m=+1302.505825114" observedRunningTime="2026-02-24 10:16:39.064737654 +0000 UTC m=+1303.521260197" watchObservedRunningTime="2026-02-24 10:16:39.07399997 +0000 UTC m=+1303.530522513" Feb 24 10:16:39 crc kubenswrapper[4755]: I0224 10:16:39.193365 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/manila-operator-controller-manager-67d996989d-6twn5" podStartSLOduration=3.49779684 podStartE2EDuration="15.193350158s" podCreationTimestamp="2026-02-24 10:16:24 +0000 UTC" firstStartedPulling="2026-02-24 10:16:26.352050251 +0000 UTC m=+1290.808572794" lastFinishedPulling="2026-02-24 10:16:38.047603549 +0000 UTC m=+1302.504126112" observedRunningTime="2026-02-24 10:16:39.186677202 +0000 UTC m=+1303.643199745" watchObservedRunningTime="2026-02-24 10:16:39.193350158 +0000 UTC m=+1303.649872701" Feb 24 10:16:39 crc kubenswrapper[4755]: I0224 10:16:39.360475 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-thrx4" podStartSLOduration=3.663430308 podStartE2EDuration="15.360457751s" podCreationTimestamp="2026-02-24 10:16:24 +0000 UTC" firstStartedPulling="2026-02-24 10:16:26.35299059 +0000 UTC m=+1290.809513133" lastFinishedPulling="2026-02-24 10:16:38.050018033 +0000 UTC m=+1302.506540576" observedRunningTime="2026-02-24 10:16:39.355407395 +0000 UTC m=+1303.811929938" watchObservedRunningTime="2026-02-24 10:16:39.360457751 +0000 UTC m=+1303.816980294" Feb 24 10:16:39 crc kubenswrapper[4755]: I0224 10:16:39.360551 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-flf8s" podStartSLOduration=3.440994775 podStartE2EDuration="15.360548244s" podCreationTimestamp="2026-02-24 10:16:24 +0000 UTC" firstStartedPulling="2026-02-24 10:16:26.203723718 +0000 UTC m=+1290.660246261" lastFinishedPulling="2026-02-24 10:16:38.123277187 +0000 UTC m=+1302.579799730" observedRunningTime="2026-02-24 10:16:39.30283246 +0000 UTC m=+1303.759355003" watchObservedRunningTime="2026-02-24 10:16:39.360548244 +0000 UTC m=+1303.817070787" Feb 24 10:16:39 crc kubenswrapper[4755]: I0224 10:16:39.410896 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-6s452" podStartSLOduration=3.71721424 podStartE2EDuration="15.410876199s" podCreationTimestamp="2026-02-24 10:16:24 +0000 UTC" firstStartedPulling="2026-02-24 10:16:26.351758463 +0000 UTC m=+1290.808281006" lastFinishedPulling="2026-02-24 10:16:38.045420422 +0000 UTC m=+1302.501942965" observedRunningTime="2026-02-24 10:16:39.406852094 +0000 UTC m=+1303.863374637" watchObservedRunningTime="2026-02-24 10:16:39.410876199 +0000 UTC m=+1303.867398742" Feb 24 10:16:39 crc kubenswrapper[4755]: I0224 10:16:39.429553 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qhqzn" podStartSLOduration=3.499059069 podStartE2EDuration="15.429532895s" podCreationTimestamp="2026-02-24 10:16:24 +0000 UTC" firstStartedPulling="2026-02-24 10:16:26.117508215 +0000 UTC m=+1290.574030758" lastFinishedPulling="2026-02-24 10:16:38.047982041 +0000 UTC m=+1302.504504584" observedRunningTime="2026-02-24 10:16:39.422327023 +0000 UTC m=+1303.878849566" watchObservedRunningTime="2026-02-24 10:16:39.429532895 +0000 UTC m=+1303.886055438" Feb 24 10:16:39 crc kubenswrapper[4755]: I0224 10:16:39.455693 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-4mx48" podStartSLOduration=3.910063208 podStartE2EDuration="15.455677623s" podCreationTimestamp="2026-02-24 10:16:24 +0000 UTC" firstStartedPulling="2026-02-24 10:16:26.542441093 +0000 UTC m=+1290.998963636" lastFinishedPulling="2026-02-24 10:16:38.088055508 +0000 UTC m=+1302.544578051" observedRunningTime="2026-02-24 10:16:39.448268413 +0000 UTC m=+1303.904790956" watchObservedRunningTime="2026-02-24 10:16:39.455677623 +0000 UTC m=+1303.912200166" Feb 24 10:16:39 crc kubenswrapper[4755]: I0224 10:16:39.486540 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jc7sx" podStartSLOduration=3.790705331 podStartE2EDuration="15.486520866s" podCreationTimestamp="2026-02-24 10:16:24 +0000 UTC" firstStartedPulling="2026-02-24 10:16:26.352729062 +0000 UTC m=+1290.809251595" lastFinishedPulling="2026-02-24 10:16:38.048544587 +0000 UTC m=+1302.505067130" observedRunningTime="2026-02-24 10:16:39.478203659 +0000 UTC m=+1303.934726202" watchObservedRunningTime="2026-02-24 10:16:39.486520866 +0000 UTC m=+1303.943043399" Feb 24 10:16:39 crc kubenswrapper[4755]: I0224 10:16:39.504741 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-5gqdt" podStartSLOduration=3.816282831 podStartE2EDuration="15.504725198s" podCreationTimestamp="2026-02-24 10:16:24 +0000 UTC" firstStartedPulling="2026-02-24 10:16:26.35232006 +0000 UTC m=+1290.808842603" lastFinishedPulling="2026-02-24 10:16:38.040762427 +0000 UTC m=+1302.497284970" observedRunningTime="2026-02-24 10:16:39.500204569 +0000 UTC m=+1303.956727112" watchObservedRunningTime="2026-02-24 10:16:39.504725198 +0000 UTC m=+1303.961247741" Feb 24 10:16:39 crc kubenswrapper[4755]: I0224 10:16:39.558136 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-sgw86" podStartSLOduration=3.144511685 podStartE2EDuration="15.558119758s" podCreationTimestamp="2026-02-24 10:16:24 +0000 UTC" firstStartedPulling="2026-02-24 10:16:25.627209096 +0000 UTC m=+1290.083731639" lastFinishedPulling="2026-02-24 10:16:38.040817169 +0000 UTC m=+1302.497339712" observedRunningTime="2026-02-24 10:16:39.552664469 +0000 UTC m=+1304.009187012" watchObservedRunningTime="2026-02-24 10:16:39.558119758 +0000 UTC m=+1304.014642301" Feb 24 10:16:40 crc kubenswrapper[4755]: I0224 10:16:40.594116 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/7aaa2f50-0589-4189-ad84-e71adb9b006b-cert\") pod \"infra-operator-controller-manager-79d975b745-7wxps\" (UID: \"7aaa2f50-0589-4189-ad84-e71adb9b006b\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-7wxps" Feb 24 10:16:40 crc kubenswrapper[4755]: I0224 10:16:40.601500 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7aaa2f50-0589-4189-ad84-e71adb9b006b-cert\") pod \"infra-operator-controller-manager-79d975b745-7wxps\" (UID: \"7aaa2f50-0589-4189-ad84-e71adb9b006b\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-7wxps" Feb 24 10:16:40 crc kubenswrapper[4755]: I0224 10:16:40.796774 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b926sww\" (UID: \"08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b926sww" Feb 24 10:16:40 crc kubenswrapper[4755]: E0224 10:16:40.797007 4755 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 10:16:40 crc kubenswrapper[4755]: E0224 10:16:40.797127 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64-cert podName:08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64 nodeName:}" failed. No retries permitted until 2026-02-24 10:16:56.797104508 +0000 UTC m=+1321.253627061 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64-cert") pod "openstack-baremetal-operator-controller-manager-579b7786b926sww" (UID: "08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 10:16:40 crc kubenswrapper[4755]: I0224 10:16:40.880489 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-7wxps" Feb 24 10:16:41 crc kubenswrapper[4755]: I0224 10:16:41.209111 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-58h2c\" (UID: \"dac16af1-0ec7-4e50-af3b-6e0524257ef2\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:16:41 crc kubenswrapper[4755]: I0224 10:16:41.209455 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-58h2c\" (UID: \"dac16af1-0ec7-4e50-af3b-6e0524257ef2\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:16:41 crc kubenswrapper[4755]: E0224 10:16:41.209281 4755 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 10:16:41 crc kubenswrapper[4755]: E0224 10:16:41.209555 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-metrics-certs podName:dac16af1-0ec7-4e50-af3b-6e0524257ef2 nodeName:}" failed. No retries permitted until 2026-02-24 10:16:57.20953904 +0000 UTC m=+1321.666061583 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-metrics-certs") pod "openstack-operator-controller-manager-5dc486cffc-58h2c" (UID: "dac16af1-0ec7-4e50-af3b-6e0524257ef2") : secret "metrics-server-cert" not found Feb 24 10:16:41 crc kubenswrapper[4755]: E0224 10:16:41.209586 4755 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 10:16:41 crc kubenswrapper[4755]: E0224 10:16:41.209614 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-webhook-certs podName:dac16af1-0ec7-4e50-af3b-6e0524257ef2 nodeName:}" failed. No retries permitted until 2026-02-24 10:16:57.209605982 +0000 UTC m=+1321.666128525 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-webhook-certs") pod "openstack-operator-controller-manager-5dc486cffc-58h2c" (UID: "dac16af1-0ec7-4e50-af3b-6e0524257ef2") : secret "webhook-server-cert" not found Feb 24 10:16:41 crc kubenswrapper[4755]: I0224 10:16:41.360376 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wwds6" podStartSLOduration=5.43270554 podStartE2EDuration="17.360353249s" podCreationTimestamp="2026-02-24 10:16:24 +0000 UTC" firstStartedPulling="2026-02-24 10:16:26.122408436 +0000 UTC m=+1290.578930979" lastFinishedPulling="2026-02-24 10:16:38.050056145 +0000 UTC m=+1302.506578688" observedRunningTime="2026-02-24 10:16:39.595761081 +0000 UTC m=+1304.052283624" watchObservedRunningTime="2026-02-24 10:16:41.360353249 +0000 UTC m=+1305.816875832" Feb 24 10:16:41 crc kubenswrapper[4755]: I0224 10:16:41.361002 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/infra-operator-controller-manager-79d975b745-7wxps"] Feb 24 10:16:41 crc kubenswrapper[4755]: I0224 10:16:41.836731 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-7wxps" event={"ID":"7aaa2f50-0589-4189-ad84-e71adb9b006b","Type":"ContainerStarted","Data":"85a739638014032d3f665cf87ab6319c74d599a4f8670ecd12759420bf40d7b5"} Feb 24 10:16:43 crc kubenswrapper[4755]: I0224 10:16:43.853630 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5vj5" event={"ID":"0f78d42d-4d7c-4a05-bf20-74701eebe735","Type":"ContainerStarted","Data":"2b029cf2dd20c27d2d910a13b71acbac2e5313a5403682434314c06a3bd83b1d"} Feb 24 10:16:43 crc kubenswrapper[4755]: I0224 10:16:43.854045 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5vj5" Feb 24 10:16:43 crc kubenswrapper[4755]: I0224 10:16:43.870421 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5vj5" podStartSLOduration=3.624671551 podStartE2EDuration="19.870406711s" podCreationTimestamp="2026-02-24 10:16:24 +0000 UTC" firstStartedPulling="2026-02-24 10:16:26.547386987 +0000 UTC m=+1291.003909530" lastFinishedPulling="2026-02-24 10:16:42.793122137 +0000 UTC m=+1307.249644690" observedRunningTime="2026-02-24 10:16:43.867399098 +0000 UTC m=+1308.323921641" watchObservedRunningTime="2026-02-24 10:16:43.870406711 +0000 UTC m=+1308.326929254" Feb 24 10:16:44 crc kubenswrapper[4755]: I0224 10:16:44.833209 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-sgw86" Feb 24 10:16:44 crc kubenswrapper[4755]: I0224 10:16:44.866045 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8msw2" Feb 24 10:16:44 crc kubenswrapper[4755]: I0224 10:16:44.884216 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qhqzn" Feb 24 10:16:44 crc kubenswrapper[4755]: I0224 10:16:44.917974 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wwds6" Feb 24 10:16:44 crc kubenswrapper[4755]: I0224 10:16:44.940317 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-k8vmv" Feb 24 10:16:44 crc kubenswrapper[4755]: I0224 10:16:44.949957 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-thrx4" Feb 24 10:16:45 crc kubenswrapper[4755]: I0224 10:16:45.018777 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5cksl" Feb 24 10:16:45 crc kubenswrapper[4755]: I0224 10:16:45.063141 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-flf8s" Feb 24 10:16:45 crc kubenswrapper[4755]: I0224 10:16:45.195244 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-6twn5" Feb 24 10:16:45 crc kubenswrapper[4755]: I0224 10:16:45.223972 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-6s452" Feb 24 10:16:45 crc kubenswrapper[4755]: I0224 10:16:45.269603 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-5gqdt" Feb 24 10:16:45 crc kubenswrapper[4755]: I0224 10:16:45.298433 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-4mx48" Feb 24 10:16:45 crc kubenswrapper[4755]: I0224 10:16:45.652493 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jc7sx" Feb 24 10:16:45 crc kubenswrapper[4755]: I0224 10:16:45.674475 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-wpn7p" Feb 24 10:16:53 crc kubenswrapper[4755]: I0224 10:16:53.931932 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-7wxps" event={"ID":"7aaa2f50-0589-4189-ad84-e71adb9b006b","Type":"ContainerStarted","Data":"f9f90fdf79a4fea68904222af3dd939f67599482b86e5b0642c70613d1ffde20"} Feb 24 10:16:53 crc kubenswrapper[4755]: I0224 10:16:53.932294 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-7wxps" Feb 24 10:16:53 crc kubenswrapper[4755]: I0224 10:16:53.933650 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-26cq7" event={"ID":"0e0b4255-1ae1-4d52-a64f-6872665d8e0a","Type":"ContainerStarted","Data":"43cdad262f15bee20335caa860b96372e6e051b4a4ee179f1424c1153cb44f7f"} Feb 24 10:16:53 crc kubenswrapper[4755]: I0224 10:16:53.934459 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-26cq7" Feb 24 10:16:53 crc kubenswrapper[4755]: I0224 10:16:53.935309 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/watcher-operator-controller-manager-bccc79885-c9nhl" event={"ID":"67020e69-36b1-40b1-b5e9-24ec55524f86","Type":"ContainerStarted","Data":"e6a1cb125c45bcc3740a5a703d0b004cc4789aeab6082ca103edf6daef246663"} Feb 24 10:16:53 crc kubenswrapper[4755]: I0224 10:16:53.935483 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-c9nhl" Feb 24 10:16:53 crc kubenswrapper[4755]: I0224 10:16:53.936363 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9sgpm" event={"ID":"68bfc267-559f-46f5-b1a3-9cdc95813801","Type":"ContainerStarted","Data":"b7cdfb3d5d0097a2c1c4a09091b161be5e043b1bdce17db866449dc5e0faa543"} Feb 24 10:16:53 crc kubenswrapper[4755]: I0224 10:16:53.936502 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9sgpm" Feb 24 10:16:53 crc kubenswrapper[4755]: I0224 10:16:53.937821 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj5kx" event={"ID":"eef5bad6-267a-4408-8d8e-c01b11a4b0f0","Type":"ContainerStarted","Data":"78c1e57b0874aae5fb6cb2d7544373c06ca2db293d2ece805e5f0921451facb9"} Feb 24 10:16:53 crc kubenswrapper[4755]: I0224 10:16:53.938922 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-dvfck" event={"ID":"498af486-ebe4-4542-ab62-11a4286729a8","Type":"ContainerStarted","Data":"cfc50198eaa66adfd160fc862500a57f496b4d359b15ae9f7f95cb849c743c5e"} Feb 24 10:16:53 crc kubenswrapper[4755]: I0224 10:16:53.939114 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-dvfck" Feb 24 10:16:53 crc kubenswrapper[4755]: I0224 10:16:53.952169 4755 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-7wxps" podStartSLOduration=18.845112493 podStartE2EDuration="29.952151668s" podCreationTimestamp="2026-02-24 10:16:24 +0000 UTC" firstStartedPulling="2026-02-24 10:16:41.368888323 +0000 UTC m=+1305.825410866" lastFinishedPulling="2026-02-24 10:16:52.475927498 +0000 UTC m=+1316.932450041" observedRunningTime="2026-02-24 10:16:53.951924321 +0000 UTC m=+1318.408446874" watchObservedRunningTime="2026-02-24 10:16:53.952151668 +0000 UTC m=+1318.408674211" Feb 24 10:16:53 crc kubenswrapper[4755]: I0224 10:16:53.966085 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-26cq7" podStartSLOduration=3.493686784 podStartE2EDuration="29.966055217s" podCreationTimestamp="2026-02-24 10:16:24 +0000 UTC" firstStartedPulling="2026-02-24 10:16:26.54295832 +0000 UTC m=+1290.999480873" lastFinishedPulling="2026-02-24 10:16:53.015326713 +0000 UTC m=+1317.471849306" observedRunningTime="2026-02-24 10:16:53.962457726 +0000 UTC m=+1318.418980289" watchObservedRunningTime="2026-02-24 10:16:53.966055217 +0000 UTC m=+1318.422577760" Feb 24 10:16:53 crc kubenswrapper[4755]: I0224 10:16:53.974950 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-dvfck" podStartSLOduration=3.407361227 podStartE2EDuration="29.974931212s" podCreationTimestamp="2026-02-24 10:16:24 +0000 UTC" firstStartedPulling="2026-02-24 10:16:26.554751734 +0000 UTC m=+1291.011274277" lastFinishedPulling="2026-02-24 10:16:53.122321719 +0000 UTC m=+1317.578844262" observedRunningTime="2026-02-24 10:16:53.973249219 +0000 UTC m=+1318.429771762" watchObservedRunningTime="2026-02-24 10:16:53.974931212 +0000 UTC m=+1318.431453755" Feb 24 10:16:54 crc kubenswrapper[4755]: I0224 10:16:54.032704 4755 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9sgpm" podStartSLOduration=3.582789897 podStartE2EDuration="30.032689916s" podCreationTimestamp="2026-02-24 10:16:24 +0000 UTC" firstStartedPulling="2026-02-24 10:16:26.566466696 +0000 UTC m=+1291.022989239" lastFinishedPulling="2026-02-24 10:16:53.016366695 +0000 UTC m=+1317.472889258" observedRunningTime="2026-02-24 10:16:54.007403065 +0000 UTC m=+1318.463925608" watchObservedRunningTime="2026-02-24 10:16:54.032689916 +0000 UTC m=+1318.489212459" Feb 24 10:16:54 crc kubenswrapper[4755]: I0224 10:16:54.033380 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-c9nhl" podStartSLOduration=2.5317427390000002 podStartE2EDuration="29.033375387s" podCreationTimestamp="2026-02-24 10:16:25 +0000 UTC" firstStartedPulling="2026-02-24 10:16:26.570534271 +0000 UTC m=+1291.027056814" lastFinishedPulling="2026-02-24 10:16:53.072166879 +0000 UTC m=+1317.528689462" observedRunningTime="2026-02-24 10:16:54.03249627 +0000 UTC m=+1318.489018813" watchObservedRunningTime="2026-02-24 10:16:54.033375387 +0000 UTC m=+1318.489897930" Feb 24 10:16:54 crc kubenswrapper[4755]: I0224 10:16:54.044282 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj5kx" podStartSLOduration=2.677253265 podStartE2EDuration="29.044263764s" podCreationTimestamp="2026-02-24 10:16:25 +0000 UTC" firstStartedPulling="2026-02-24 10:16:26.704844961 +0000 UTC m=+1291.161367504" lastFinishedPulling="2026-02-24 10:16:53.07185545 +0000 UTC m=+1317.528378003" observedRunningTime="2026-02-24 10:16:54.042131128 +0000 UTC m=+1318.498653671" watchObservedRunningTime="2026-02-24 10:16:54.044263764 +0000 UTC m=+1318.500786307" Feb 24 10:16:55 crc kubenswrapper[4755]: I0224 10:16:55.260560 4755 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5vj5" Feb 24 10:16:56 crc kubenswrapper[4755]: I0224 10:16:56.834420 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b926sww\" (UID: \"08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b926sww" Feb 24 10:16:56 crc kubenswrapper[4755]: I0224 10:16:56.844032 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b926sww\" (UID: \"08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b926sww" Feb 24 10:16:57 crc kubenswrapper[4755]: I0224 10:16:57.076011 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-kzs9q" Feb 24 10:16:57 crc kubenswrapper[4755]: I0224 10:16:57.085137 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b926sww" Feb 24 10:16:57 crc kubenswrapper[4755]: I0224 10:16:57.240899 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-58h2c\" (UID: \"dac16af1-0ec7-4e50-af3b-6e0524257ef2\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:16:57 crc kubenswrapper[4755]: I0224 10:16:57.240952 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-58h2c\" (UID: \"dac16af1-0ec7-4e50-af3b-6e0524257ef2\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:16:57 crc kubenswrapper[4755]: I0224 10:16:57.249091 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-58h2c\" (UID: \"dac16af1-0ec7-4e50-af3b-6e0524257ef2\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:16:57 crc kubenswrapper[4755]: I0224 10:16:57.249179 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac16af1-0ec7-4e50-af3b-6e0524257ef2-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-58h2c\" (UID: \"dac16af1-0ec7-4e50-af3b-6e0524257ef2\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:16:57 crc kubenswrapper[4755]: I0224 10:16:57.389881 4755 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-d8kl9" Feb 24 10:16:57 crc kubenswrapper[4755]: I0224 10:16:57.398167 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:16:57 crc kubenswrapper[4755]: I0224 10:16:57.617569 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b926sww"] Feb 24 10:16:57 crc kubenswrapper[4755]: I0224 10:16:57.632202 4755 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 10:16:57 crc kubenswrapper[4755]: I0224 10:16:57.638357 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c"] Feb 24 10:16:57 crc kubenswrapper[4755]: W0224 10:16:57.640464 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddac16af1_0ec7_4e50_af3b_6e0524257ef2.slice/crio-5ebb2785ed6f092ce29636b92c0be74a0b51b025a6326a5e6b9147a414381aa4 WatchSource:0}: Error finding container 5ebb2785ed6f092ce29636b92c0be74a0b51b025a6326a5e6b9147a414381aa4: Status 404 returned error can't find the container with id 5ebb2785ed6f092ce29636b92c0be74a0b51b025a6326a5e6b9147a414381aa4 Feb 24 10:16:57 crc kubenswrapper[4755]: I0224 10:16:57.968920 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b926sww" event={"ID":"08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64","Type":"ContainerStarted","Data":"aa10451719ecd98cfcc49c684b449b70d2b1aff39b0a6c684c51f65c6ff80ca3"} Feb 24 10:16:57 crc kubenswrapper[4755]: I0224 10:16:57.970339 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" 
event={"ID":"dac16af1-0ec7-4e50-af3b-6e0524257ef2","Type":"ContainerStarted","Data":"7b401d033724749c754de148ac766129c3d94509e7e377491d1882babac05868"} Feb 24 10:16:57 crc kubenswrapper[4755]: I0224 10:16:57.970358 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" event={"ID":"dac16af1-0ec7-4e50-af3b-6e0524257ef2","Type":"ContainerStarted","Data":"5ebb2785ed6f092ce29636b92c0be74a0b51b025a6326a5e6b9147a414381aa4"} Feb 24 10:16:57 crc kubenswrapper[4755]: I0224 10:16:57.970551 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:16:58 crc kubenswrapper[4755]: I0224 10:16:58.000144 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" podStartSLOduration=33.000113364 podStartE2EDuration="33.000113364s" podCreationTimestamp="2026-02-24 10:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 10:16:57.991320992 +0000 UTC m=+1322.447843565" watchObservedRunningTime="2026-02-24 10:16:58.000113364 +0000 UTC m=+1322.456635967" Feb 24 10:17:00 crc kubenswrapper[4755]: I0224 10:17:00.891656 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-7wxps" Feb 24 10:17:01 crc kubenswrapper[4755]: I0224 10:17:01.002378 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b926sww" event={"ID":"08d53dac-d71b-43bc-b5f9-f8c7c1fc4e64","Type":"ContainerStarted","Data":"3e5000370a289c8e83cc1824e4d782a7c2cca62379d5f15567ba613305b44c5c"} Feb 24 10:17:01 crc kubenswrapper[4755]: I0224 10:17:01.003437 4755 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b926sww" Feb 24 10:17:01 crc kubenswrapper[4755]: I0224 10:17:01.030196 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b926sww" podStartSLOduration=34.822603594 podStartE2EDuration="37.03017504s" podCreationTimestamp="2026-02-24 10:16:24 +0000 UTC" firstStartedPulling="2026-02-24 10:16:57.631968119 +0000 UTC m=+1322.088490662" lastFinishedPulling="2026-02-24 10:16:59.839539565 +0000 UTC m=+1324.296062108" observedRunningTime="2026-02-24 10:17:01.028314363 +0000 UTC m=+1325.484836906" watchObservedRunningTime="2026-02-24 10:17:01.03017504 +0000 UTC m=+1325.486697603" Feb 24 10:17:05 crc kubenswrapper[4755]: I0224 10:17:05.266521 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9sgpm" Feb 24 10:17:05 crc kubenswrapper[4755]: I0224 10:17:05.495943 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-26cq7" Feb 24 10:17:05 crc kubenswrapper[4755]: I0224 10:17:05.562038 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-c9nhl" Feb 24 10:17:05 crc kubenswrapper[4755]: I0224 10:17:05.641505 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-dvfck" Feb 24 10:17:07 crc kubenswrapper[4755]: I0224 10:17:07.094541 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b926sww" Feb 24 10:17:07 crc kubenswrapper[4755]: I0224 10:17:07.406715 4755 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-58h2c" Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.138171 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-776868475c-pnc8b"] Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.141158 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-776868475c-pnc8b" Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.143153 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.144086 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.145534 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-2hml8" Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.148999 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.150309 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-776868475c-pnc8b"] Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.231647 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-648df8f4f9-vnf58"] Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.232963 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-648df8f4f9-vnf58" Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.236518 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.258454 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-648df8f4f9-vnf58"] Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.281665 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2221632c-ed64-4067-b217-f34dc4ca3cfc-config\") pod \"dnsmasq-dns-776868475c-pnc8b\" (UID: \"2221632c-ed64-4067-b217-f34dc4ca3cfc\") " pod="openstack/dnsmasq-dns-776868475c-pnc8b" Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.281759 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrlpb\" (UniqueName: \"kubernetes.io/projected/2221632c-ed64-4067-b217-f34dc4ca3cfc-kube-api-access-vrlpb\") pod \"dnsmasq-dns-776868475c-pnc8b\" (UID: \"2221632c-ed64-4067-b217-f34dc4ca3cfc\") " pod="openstack/dnsmasq-dns-776868475c-pnc8b" Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.382813 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2221632c-ed64-4067-b217-f34dc4ca3cfc-config\") pod \"dnsmasq-dns-776868475c-pnc8b\" (UID: \"2221632c-ed64-4067-b217-f34dc4ca3cfc\") " pod="openstack/dnsmasq-dns-776868475c-pnc8b" Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.384102 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbbf8\" (UniqueName: \"kubernetes.io/projected/ab893943-e1e2-489d-ab08-304b6aa3ea9e-kube-api-access-jbbf8\") pod \"dnsmasq-dns-648df8f4f9-vnf58\" (UID: \"ab893943-e1e2-489d-ab08-304b6aa3ea9e\") " pod="openstack/dnsmasq-dns-648df8f4f9-vnf58" 
Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.383992 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2221632c-ed64-4067-b217-f34dc4ca3cfc-config\") pod \"dnsmasq-dns-776868475c-pnc8b\" (UID: \"2221632c-ed64-4067-b217-f34dc4ca3cfc\") " pod="openstack/dnsmasq-dns-776868475c-pnc8b" Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.384201 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrlpb\" (UniqueName: \"kubernetes.io/projected/2221632c-ed64-4067-b217-f34dc4ca3cfc-kube-api-access-vrlpb\") pod \"dnsmasq-dns-776868475c-pnc8b\" (UID: \"2221632c-ed64-4067-b217-f34dc4ca3cfc\") " pod="openstack/dnsmasq-dns-776868475c-pnc8b" Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.384273 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab893943-e1e2-489d-ab08-304b6aa3ea9e-dns-svc\") pod \"dnsmasq-dns-648df8f4f9-vnf58\" (UID: \"ab893943-e1e2-489d-ab08-304b6aa3ea9e\") " pod="openstack/dnsmasq-dns-648df8f4f9-vnf58" Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.384333 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab893943-e1e2-489d-ab08-304b6aa3ea9e-config\") pod \"dnsmasq-dns-648df8f4f9-vnf58\" (UID: \"ab893943-e1e2-489d-ab08-304b6aa3ea9e\") " pod="openstack/dnsmasq-dns-648df8f4f9-vnf58" Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.409035 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrlpb\" (UniqueName: \"kubernetes.io/projected/2221632c-ed64-4067-b217-f34dc4ca3cfc-kube-api-access-vrlpb\") pod \"dnsmasq-dns-776868475c-pnc8b\" (UID: \"2221632c-ed64-4067-b217-f34dc4ca3cfc\") " pod="openstack/dnsmasq-dns-776868475c-pnc8b" Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 
10:17:24.460978 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-776868475c-pnc8b" Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.485257 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab893943-e1e2-489d-ab08-304b6aa3ea9e-dns-svc\") pod \"dnsmasq-dns-648df8f4f9-vnf58\" (UID: \"ab893943-e1e2-489d-ab08-304b6aa3ea9e\") " pod="openstack/dnsmasq-dns-648df8f4f9-vnf58" Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.485310 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab893943-e1e2-489d-ab08-304b6aa3ea9e-config\") pod \"dnsmasq-dns-648df8f4f9-vnf58\" (UID: \"ab893943-e1e2-489d-ab08-304b6aa3ea9e\") " pod="openstack/dnsmasq-dns-648df8f4f9-vnf58" Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.485413 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbbf8\" (UniqueName: \"kubernetes.io/projected/ab893943-e1e2-489d-ab08-304b6aa3ea9e-kube-api-access-jbbf8\") pod \"dnsmasq-dns-648df8f4f9-vnf58\" (UID: \"ab893943-e1e2-489d-ab08-304b6aa3ea9e\") " pod="openstack/dnsmasq-dns-648df8f4f9-vnf58" Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.486299 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab893943-e1e2-489d-ab08-304b6aa3ea9e-dns-svc\") pod \"dnsmasq-dns-648df8f4f9-vnf58\" (UID: \"ab893943-e1e2-489d-ab08-304b6aa3ea9e\") " pod="openstack/dnsmasq-dns-648df8f4f9-vnf58" Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.486724 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab893943-e1e2-489d-ab08-304b6aa3ea9e-config\") pod \"dnsmasq-dns-648df8f4f9-vnf58\" (UID: \"ab893943-e1e2-489d-ab08-304b6aa3ea9e\") " 
pod="openstack/dnsmasq-dns-648df8f4f9-vnf58" Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.503515 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbbf8\" (UniqueName: \"kubernetes.io/projected/ab893943-e1e2-489d-ab08-304b6aa3ea9e-kube-api-access-jbbf8\") pod \"dnsmasq-dns-648df8f4f9-vnf58\" (UID: \"ab893943-e1e2-489d-ab08-304b6aa3ea9e\") " pod="openstack/dnsmasq-dns-648df8f4f9-vnf58" Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.557547 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-648df8f4f9-vnf58" Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.885426 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-776868475c-pnc8b"] Feb 24 10:17:24 crc kubenswrapper[4755]: I0224 10:17:24.990053 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-648df8f4f9-vnf58"] Feb 24 10:17:24 crc kubenswrapper[4755]: W0224 10:17:24.992529 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab893943_e1e2_489d_ab08_304b6aa3ea9e.slice/crio-2651628a9b22db4a41b84c0ad72b06e3db1cbd95daecf9d3ac294a9492e4e01c WatchSource:0}: Error finding container 2651628a9b22db4a41b84c0ad72b06e3db1cbd95daecf9d3ac294a9492e4e01c: Status 404 returned error can't find the container with id 2651628a9b22db4a41b84c0ad72b06e3db1cbd95daecf9d3ac294a9492e4e01c Feb 24 10:17:25 crc kubenswrapper[4755]: I0224 10:17:25.203467 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-648df8f4f9-vnf58" event={"ID":"ab893943-e1e2-489d-ab08-304b6aa3ea9e","Type":"ContainerStarted","Data":"2651628a9b22db4a41b84c0ad72b06e3db1cbd95daecf9d3ac294a9492e4e01c"} Feb 24 10:17:25 crc kubenswrapper[4755]: I0224 10:17:25.204876 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-776868475c-pnc8b" 
event={"ID":"2221632c-ed64-4067-b217-f34dc4ca3cfc","Type":"ContainerStarted","Data":"60b2bfdcece9ec81e0e4470001c7496560dd0c136c7c6228a283a66561681275"} Feb 24 10:17:26 crc kubenswrapper[4755]: I0224 10:17:26.069841 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-648df8f4f9-vnf58"] Feb 24 10:17:26 crc kubenswrapper[4755]: I0224 10:17:26.082991 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9fb6c8b7-sdps9"] Feb 24 10:17:26 crc kubenswrapper[4755]: I0224 10:17:26.092876 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9fb6c8b7-sdps9" Feb 24 10:17:26 crc kubenswrapper[4755]: I0224 10:17:26.101252 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9fb6c8b7-sdps9"] Feb 24 10:17:26 crc kubenswrapper[4755]: I0224 10:17:26.217830 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlz8n\" (UniqueName: \"kubernetes.io/projected/d59fa737-51c8-412d-b6c5-150038e26abb-kube-api-access-nlz8n\") pod \"dnsmasq-dns-5c9fb6c8b7-sdps9\" (UID: \"d59fa737-51c8-412d-b6c5-150038e26abb\") " pod="openstack/dnsmasq-dns-5c9fb6c8b7-sdps9" Feb 24 10:17:26 crc kubenswrapper[4755]: I0224 10:17:26.217907 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d59fa737-51c8-412d-b6c5-150038e26abb-config\") pod \"dnsmasq-dns-5c9fb6c8b7-sdps9\" (UID: \"d59fa737-51c8-412d-b6c5-150038e26abb\") " pod="openstack/dnsmasq-dns-5c9fb6c8b7-sdps9" Feb 24 10:17:26 crc kubenswrapper[4755]: I0224 10:17:26.218170 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d59fa737-51c8-412d-b6c5-150038e26abb-dns-svc\") pod \"dnsmasq-dns-5c9fb6c8b7-sdps9\" (UID: \"d59fa737-51c8-412d-b6c5-150038e26abb\") " 
pod="openstack/dnsmasq-dns-5c9fb6c8b7-sdps9" Feb 24 10:17:26 crc kubenswrapper[4755]: I0224 10:17:26.320705 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d59fa737-51c8-412d-b6c5-150038e26abb-dns-svc\") pod \"dnsmasq-dns-5c9fb6c8b7-sdps9\" (UID: \"d59fa737-51c8-412d-b6c5-150038e26abb\") " pod="openstack/dnsmasq-dns-5c9fb6c8b7-sdps9" Feb 24 10:17:26 crc kubenswrapper[4755]: I0224 10:17:26.320760 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlz8n\" (UniqueName: \"kubernetes.io/projected/d59fa737-51c8-412d-b6c5-150038e26abb-kube-api-access-nlz8n\") pod \"dnsmasq-dns-5c9fb6c8b7-sdps9\" (UID: \"d59fa737-51c8-412d-b6c5-150038e26abb\") " pod="openstack/dnsmasq-dns-5c9fb6c8b7-sdps9" Feb 24 10:17:26 crc kubenswrapper[4755]: I0224 10:17:26.320791 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d59fa737-51c8-412d-b6c5-150038e26abb-config\") pod \"dnsmasq-dns-5c9fb6c8b7-sdps9\" (UID: \"d59fa737-51c8-412d-b6c5-150038e26abb\") " pod="openstack/dnsmasq-dns-5c9fb6c8b7-sdps9" Feb 24 10:17:26 crc kubenswrapper[4755]: I0224 10:17:26.321700 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d59fa737-51c8-412d-b6c5-150038e26abb-config\") pod \"dnsmasq-dns-5c9fb6c8b7-sdps9\" (UID: \"d59fa737-51c8-412d-b6c5-150038e26abb\") " pod="openstack/dnsmasq-dns-5c9fb6c8b7-sdps9" Feb 24 10:17:26 crc kubenswrapper[4755]: I0224 10:17:26.322192 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d59fa737-51c8-412d-b6c5-150038e26abb-dns-svc\") pod \"dnsmasq-dns-5c9fb6c8b7-sdps9\" (UID: \"d59fa737-51c8-412d-b6c5-150038e26abb\") " pod="openstack/dnsmasq-dns-5c9fb6c8b7-sdps9" Feb 24 10:17:26 crc kubenswrapper[4755]: I0224 10:17:26.369059 4755 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlz8n\" (UniqueName: \"kubernetes.io/projected/d59fa737-51c8-412d-b6c5-150038e26abb-kube-api-access-nlz8n\") pod \"dnsmasq-dns-5c9fb6c8b7-sdps9\" (UID: \"d59fa737-51c8-412d-b6c5-150038e26abb\") " pod="openstack/dnsmasq-dns-5c9fb6c8b7-sdps9" Feb 24 10:17:26 crc kubenswrapper[4755]: I0224 10:17:26.427745 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9fb6c8b7-sdps9" Feb 24 10:17:26 crc kubenswrapper[4755]: I0224 10:17:26.864729 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9fb6c8b7-sdps9"] Feb 24 10:17:26 crc kubenswrapper[4755]: I0224 10:17:26.977860 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-776868475c-pnc8b"] Feb 24 10:17:26 crc kubenswrapper[4755]: I0224 10:17:26.996923 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6dfb8ff55f-z5jxd"] Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.007239 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dfb8ff55f-z5jxd"] Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.007626 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dfb8ff55f-z5jxd" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.130897 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr6dk\" (UniqueName: \"kubernetes.io/projected/7872be5b-22c5-441e-8b92-fccc79705037-kube-api-access-wr6dk\") pod \"dnsmasq-dns-6dfb8ff55f-z5jxd\" (UID: \"7872be5b-22c5-441e-8b92-fccc79705037\") " pod="openstack/dnsmasq-dns-6dfb8ff55f-z5jxd" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.130994 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7872be5b-22c5-441e-8b92-fccc79705037-dns-svc\") pod \"dnsmasq-dns-6dfb8ff55f-z5jxd\" (UID: \"7872be5b-22c5-441e-8b92-fccc79705037\") " pod="openstack/dnsmasq-dns-6dfb8ff55f-z5jxd" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.131042 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7872be5b-22c5-441e-8b92-fccc79705037-config\") pod \"dnsmasq-dns-6dfb8ff55f-z5jxd\" (UID: \"7872be5b-22c5-441e-8b92-fccc79705037\") " pod="openstack/dnsmasq-dns-6dfb8ff55f-z5jxd" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.235104 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7872be5b-22c5-441e-8b92-fccc79705037-dns-svc\") pod \"dnsmasq-dns-6dfb8ff55f-z5jxd\" (UID: \"7872be5b-22c5-441e-8b92-fccc79705037\") " pod="openstack/dnsmasq-dns-6dfb8ff55f-z5jxd" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.235205 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7872be5b-22c5-441e-8b92-fccc79705037-config\") pod \"dnsmasq-dns-6dfb8ff55f-z5jxd\" (UID: \"7872be5b-22c5-441e-8b92-fccc79705037\") " 
pod="openstack/dnsmasq-dns-6dfb8ff55f-z5jxd" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.235346 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr6dk\" (UniqueName: \"kubernetes.io/projected/7872be5b-22c5-441e-8b92-fccc79705037-kube-api-access-wr6dk\") pod \"dnsmasq-dns-6dfb8ff55f-z5jxd\" (UID: \"7872be5b-22c5-441e-8b92-fccc79705037\") " pod="openstack/dnsmasq-dns-6dfb8ff55f-z5jxd" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.237981 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7872be5b-22c5-441e-8b92-fccc79705037-dns-svc\") pod \"dnsmasq-dns-6dfb8ff55f-z5jxd\" (UID: \"7872be5b-22c5-441e-8b92-fccc79705037\") " pod="openstack/dnsmasq-dns-6dfb8ff55f-z5jxd" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.238578 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7872be5b-22c5-441e-8b92-fccc79705037-config\") pod \"dnsmasq-dns-6dfb8ff55f-z5jxd\" (UID: \"7872be5b-22c5-441e-8b92-fccc79705037\") " pod="openstack/dnsmasq-dns-6dfb8ff55f-z5jxd" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.260131 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fb6c8b7-sdps9" event={"ID":"d59fa737-51c8-412d-b6c5-150038e26abb","Type":"ContainerStarted","Data":"e845c1e4a523f050d62ab4e09b523ae43c57c208fb681e01ea0ae1f7e0d0432b"} Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.278252 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.289701 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.293140 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.293204 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.293281 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr6dk\" (UniqueName: \"kubernetes.io/projected/7872be5b-22c5-441e-8b92-fccc79705037-kube-api-access-wr6dk\") pod \"dnsmasq-dns-6dfb8ff55f-z5jxd\" (UID: \"7872be5b-22c5-441e-8b92-fccc79705037\") " pod="openstack/dnsmasq-dns-6dfb8ff55f-z5jxd" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.293569 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.293678 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.293693 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-tbpjc" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.293737 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.293698 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.299655 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.330212 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dfb8ff55f-z5jxd" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.441292 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2slb8\" (UniqueName: \"kubernetes.io/projected/6f17c8cb-fdce-4338-a08b-351043733dd8-kube-api-access-2slb8\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.441332 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6f17c8cb-fdce-4338-a08b-351043733dd8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.441444 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6f17c8cb-fdce-4338-a08b-351043733dd8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.441476 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.441508 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6f17c8cb-fdce-4338-a08b-351043733dd8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" 
Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.441621 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6f17c8cb-fdce-4338-a08b-351043733dd8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.442221 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6f17c8cb-fdce-4338-a08b-351043733dd8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.442302 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6f17c8cb-fdce-4338-a08b-351043733dd8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.442420 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6f17c8cb-fdce-4338-a08b-351043733dd8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.442492 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6f17c8cb-fdce-4338-a08b-351043733dd8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.442551 4755 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f17c8cb-fdce-4338-a08b-351043733dd8-config-data\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.543313 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6f17c8cb-fdce-4338-a08b-351043733dd8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.543349 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6f17c8cb-fdce-4338-a08b-351043733dd8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.543368 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f17c8cb-fdce-4338-a08b-351043733dd8-config-data\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.543394 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2slb8\" (UniqueName: \"kubernetes.io/projected/6f17c8cb-fdce-4338-a08b-351043733dd8-kube-api-access-2slb8\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.543434 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/6f17c8cb-fdce-4338-a08b-351043733dd8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.543469 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6f17c8cb-fdce-4338-a08b-351043733dd8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.543486 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.543531 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6f17c8cb-fdce-4338-a08b-351043733dd8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.543562 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6f17c8cb-fdce-4338-a08b-351043733dd8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.543683 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6f17c8cb-fdce-4338-a08b-351043733dd8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " 
pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.543703 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6f17c8cb-fdce-4338-a08b-351043733dd8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.545017 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6f17c8cb-fdce-4338-a08b-351043733dd8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.545524 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6f17c8cb-fdce-4338-a08b-351043733dd8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.545771 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6f17c8cb-fdce-4338-a08b-351043733dd8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.546286 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f17c8cb-fdce-4338-a08b-351043733dd8-config-data\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.549921 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/6f17c8cb-fdce-4338-a08b-351043733dd8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.550496 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.552992 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6f17c8cb-fdce-4338-a08b-351043733dd8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.555416 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6f17c8cb-fdce-4338-a08b-351043733dd8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.555573 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6f17c8cb-fdce-4338-a08b-351043733dd8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.559376 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6f17c8cb-fdce-4338-a08b-351043733dd8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " 
pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.565022 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2slb8\" (UniqueName: \"kubernetes.io/projected/6f17c8cb-fdce-4338-a08b-351043733dd8-kube-api-access-2slb8\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.599560 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"6f17c8cb-fdce-4338-a08b-351043733dd8\") " pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.616364 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 24 10:17:27 crc kubenswrapper[4755]: I0224 10:17:27.783980 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dfb8ff55f-z5jxd"] Feb 24 10:17:27 crc kubenswrapper[4755]: W0224 10:17:27.794028 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7872be5b_22c5_441e_8b92_fccc79705037.slice/crio-741b5139bc2d2b654c8bce4692804117c4b5b1a7ddf8e07f1bb52aaf90bc766f WatchSource:0}: Error finding container 741b5139bc2d2b654c8bce4692804117c4b5b1a7ddf8e07f1bb52aaf90bc766f: Status 404 returned error can't find the container with id 741b5139bc2d2b654c8bce4692804117c4b5b1a7ddf8e07f1bb52aaf90bc766f Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.089164 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.108756 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.110242 4755 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.114194 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.114636 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.114788 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.115046 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.115202 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.115428 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xjxc2" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.115593 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.127087 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.254602 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/36ea9987-0fd2-4c40-845d-463254c1fecf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.254655 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36ea9987-0fd2-4c40-845d-463254c1fecf-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.254676 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/36ea9987-0fd2-4c40-845d-463254c1fecf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.254710 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/36ea9987-0fd2-4c40-845d-463254c1fecf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.254727 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/36ea9987-0fd2-4c40-845d-463254c1fecf-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.254741 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/36ea9987-0fd2-4c40-845d-463254c1fecf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.254768 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/36ea9987-0fd2-4c40-845d-463254c1fecf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.254790 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/36ea9987-0fd2-4c40-845d-463254c1fecf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.254819 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sjv5\" (UniqueName: \"kubernetes.io/projected/36ea9987-0fd2-4c40-845d-463254c1fecf-kube-api-access-2sjv5\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.254852 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.254870 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/36ea9987-0fd2-4c40-845d-463254c1fecf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.279296 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6dfb8ff55f-z5jxd" event={"ID":"7872be5b-22c5-441e-8b92-fccc79705037","Type":"ContainerStarted","Data":"741b5139bc2d2b654c8bce4692804117c4b5b1a7ddf8e07f1bb52aaf90bc766f"} Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.280442 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6f17c8cb-fdce-4338-a08b-351043733dd8","Type":"ContainerStarted","Data":"61e8c06bd21e73d37d7021750a80778a56691b6994c348b6f1884de55af96632"} Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.356988 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/36ea9987-0fd2-4c40-845d-463254c1fecf-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.357047 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/36ea9987-0fd2-4c40-845d-463254c1fecf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.357081 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/36ea9987-0fd2-4c40-845d-463254c1fecf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.357108 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/36ea9987-0fd2-4c40-845d-463254c1fecf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.357130 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/36ea9987-0fd2-4c40-845d-463254c1fecf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.357159 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sjv5\" (UniqueName: \"kubernetes.io/projected/36ea9987-0fd2-4c40-845d-463254c1fecf-kube-api-access-2sjv5\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.357194 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.357217 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/36ea9987-0fd2-4c40-845d-463254c1fecf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.357257 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/36ea9987-0fd2-4c40-845d-463254c1fecf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.357285 4755 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36ea9987-0fd2-4c40-845d-463254c1fecf-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.357299 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/36ea9987-0fd2-4c40-845d-463254c1fecf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.358005 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/36ea9987-0fd2-4c40-845d-463254c1fecf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.358636 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/36ea9987-0fd2-4c40-845d-463254c1fecf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.358699 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36ea9987-0fd2-4c40-845d-463254c1fecf-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.358995 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/36ea9987-0fd2-4c40-845d-463254c1fecf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.359088 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.359196 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/36ea9987-0fd2-4c40-845d-463254c1fecf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.366557 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/36ea9987-0fd2-4c40-845d-463254c1fecf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.367443 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/36ea9987-0fd2-4c40-845d-463254c1fecf-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.369446 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/36ea9987-0fd2-4c40-845d-463254c1fecf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.370316 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/36ea9987-0fd2-4c40-845d-463254c1fecf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.376740 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sjv5\" (UniqueName: \"kubernetes.io/projected/36ea9987-0fd2-4c40-845d-463254c1fecf-kube-api-access-2sjv5\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.395329 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"36ea9987-0fd2-4c40-845d-463254c1fecf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.440389 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:17:28 crc kubenswrapper[4755]: I0224 10:17:28.985390 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.297588 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"36ea9987-0fd2-4c40-845d-463254c1fecf","Type":"ContainerStarted","Data":"f026072c76864580cd9440bc1d9920f6f3105c12a3e1607b08890d86c9d70089"} Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.553717 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.554941 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.557294 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.558510 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.558788 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-6c2rb" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.559680 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.563620 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.577299 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.674953 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmnx6\" (UniqueName: \"kubernetes.io/projected/fe13802e-a28d-4e11-a315-c0ae66bf0e1c-kube-api-access-xmnx6\") pod \"openstack-galera-0\" (UID: \"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") " pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.675010 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") " pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.675044 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fe13802e-a28d-4e11-a315-c0ae66bf0e1c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") " pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.675087 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fe13802e-a28d-4e11-a315-c0ae66bf0e1c-kolla-config\") pod \"openstack-galera-0\" (UID: \"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") " pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.675105 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe13802e-a28d-4e11-a315-c0ae66bf0e1c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") " pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.675124 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fe13802e-a28d-4e11-a315-c0ae66bf0e1c-config-data-default\") pod \"openstack-galera-0\" (UID: \"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") " pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.675178 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe13802e-a28d-4e11-a315-c0ae66bf0e1c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") " pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.675196 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe13802e-a28d-4e11-a315-c0ae66bf0e1c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") " pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.776813 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") " pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.776890 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fe13802e-a28d-4e11-a315-c0ae66bf0e1c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") " pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.776923 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/fe13802e-a28d-4e11-a315-c0ae66bf0e1c-kolla-config\") pod \"openstack-galera-0\" (UID: \"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") " pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.776947 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe13802e-a28d-4e11-a315-c0ae66bf0e1c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") " pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.776971 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fe13802e-a28d-4e11-a315-c0ae66bf0e1c-config-data-default\") pod \"openstack-galera-0\" (UID: \"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") " pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.778336 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fe13802e-a28d-4e11-a315-c0ae66bf0e1c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") " pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.778642 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.779895 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe13802e-a28d-4e11-a315-c0ae66bf0e1c-operator-scripts\") pod \"openstack-galera-0\" (UID: 
\"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") " pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.780363 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fe13802e-a28d-4e11-a315-c0ae66bf0e1c-kolla-config\") pod \"openstack-galera-0\" (UID: \"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") " pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.781004 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fe13802e-a28d-4e11-a315-c0ae66bf0e1c-config-data-default\") pod \"openstack-galera-0\" (UID: \"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") " pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.777027 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe13802e-a28d-4e11-a315-c0ae66bf0e1c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") " pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.781431 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe13802e-a28d-4e11-a315-c0ae66bf0e1c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") " pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.781503 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmnx6\" (UniqueName: \"kubernetes.io/projected/fe13802e-a28d-4e11-a315-c0ae66bf0e1c-kube-api-access-xmnx6\") pod \"openstack-galera-0\" (UID: \"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") " pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.783171 
4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe13802e-a28d-4e11-a315-c0ae66bf0e1c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") " pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.791821 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe13802e-a28d-4e11-a315-c0ae66bf0e1c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") " pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.813500 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmnx6\" (UniqueName: \"kubernetes.io/projected/fe13802e-a28d-4e11-a315-c0ae66bf0e1c-kube-api-access-xmnx6\") pod \"openstack-galera-0\" (UID: \"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") " pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.863436 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"fe13802e-a28d-4e11-a315-c0ae66bf0e1c\") " pod="openstack/openstack-galera-0" Feb 24 10:17:29 crc kubenswrapper[4755]: I0224 10:17:29.886605 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 24 10:17:30 crc kubenswrapper[4755]: I0224 10:17:30.961545 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 24 10:17:30 crc kubenswrapper[4755]: I0224 10:17:30.962850 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:30 crc kubenswrapper[4755]: I0224 10:17:30.966593 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 24 10:17:30 crc kubenswrapper[4755]: I0224 10:17:30.972925 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 24 10:17:30 crc kubenswrapper[4755]: I0224 10:17:30.973010 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 24 10:17:30 crc kubenswrapper[4755]: I0224 10:17:30.973188 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 24 10:17:30 crc kubenswrapper[4755]: I0224 10:17:30.973348 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-b7lwz" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.005896 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f320527-691f-48e9-a243-f60bc805da39-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2f320527-691f-48e9-a243-f60bc805da39\") " pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.005956 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2f320527-691f-48e9-a243-f60bc805da39-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2f320527-691f-48e9-a243-f60bc805da39\") " pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.005981 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2f320527-691f-48e9-a243-f60bc805da39-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2f320527-691f-48e9-a243-f60bc805da39\") " pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.006013 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f320527-691f-48e9-a243-f60bc805da39-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2f320527-691f-48e9-a243-f60bc805da39\") " pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.006040 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc4mx\" (UniqueName: \"kubernetes.io/projected/2f320527-691f-48e9-a243-f60bc805da39-kube-api-access-gc4mx\") pod \"openstack-cell1-galera-0\" (UID: \"2f320527-691f-48e9-a243-f60bc805da39\") " pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.006087 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2f320527-691f-48e9-a243-f60bc805da39-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2f320527-691f-48e9-a243-f60bc805da39\") " pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.006119 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2f320527-691f-48e9-a243-f60bc805da39-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2f320527-691f-48e9-a243-f60bc805da39\") " pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.006319 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2f320527-691f-48e9-a243-f60bc805da39\") " pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.112625 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2f320527-691f-48e9-a243-f60bc805da39-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2f320527-691f-48e9-a243-f60bc805da39\") " pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.112709 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2f320527-691f-48e9-a243-f60bc805da39\") " pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.112738 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f320527-691f-48e9-a243-f60bc805da39-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2f320527-691f-48e9-a243-f60bc805da39\") " pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.112778 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2f320527-691f-48e9-a243-f60bc805da39-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2f320527-691f-48e9-a243-f60bc805da39\") " pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.112802 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f320527-691f-48e9-a243-f60bc805da39-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"2f320527-691f-48e9-a243-f60bc805da39\") " pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.112842 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f320527-691f-48e9-a243-f60bc805da39-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2f320527-691f-48e9-a243-f60bc805da39\") " pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.112874 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc4mx\" (UniqueName: \"kubernetes.io/projected/2f320527-691f-48e9-a243-f60bc805da39-kube-api-access-gc4mx\") pod \"openstack-cell1-galera-0\" (UID: \"2f320527-691f-48e9-a243-f60bc805da39\") " pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.112930 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2f320527-691f-48e9-a243-f60bc805da39-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2f320527-691f-48e9-a243-f60bc805da39\") " pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.113432 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2f320527-691f-48e9-a243-f60bc805da39\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.113503 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2f320527-691f-48e9-a243-f60bc805da39-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: 
\"2f320527-691f-48e9-a243-f60bc805da39\") " pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.113894 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2f320527-691f-48e9-a243-f60bc805da39-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2f320527-691f-48e9-a243-f60bc805da39\") " pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.114836 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f320527-691f-48e9-a243-f60bc805da39-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2f320527-691f-48e9-a243-f60bc805da39\") " pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.114856 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2f320527-691f-48e9-a243-f60bc805da39-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2f320527-691f-48e9-a243-f60bc805da39\") " pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.118162 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f320527-691f-48e9-a243-f60bc805da39-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2f320527-691f-48e9-a243-f60bc805da39\") " pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.118166 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f320527-691f-48e9-a243-f60bc805da39-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2f320527-691f-48e9-a243-f60bc805da39\") " pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc 
kubenswrapper[4755]: I0224 10:17:31.127446 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc4mx\" (UniqueName: \"kubernetes.io/projected/2f320527-691f-48e9-a243-f60bc805da39-kube-api-access-gc4mx\") pod \"openstack-cell1-galera-0\" (UID: \"2f320527-691f-48e9-a243-f60bc805da39\") " pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.143537 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2f320527-691f-48e9-a243-f60bc805da39\") " pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.256598 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.257498 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.261468 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-5dxq8" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.261702 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.261830 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.277207 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.288913 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.315094 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c31ba439-10a5-48d6-abb0-0cf70ce01151-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c31ba439-10a5-48d6-abb0-0cf70ce01151\") " pod="openstack/memcached-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.315190 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c31ba439-10a5-48d6-abb0-0cf70ce01151-kolla-config\") pod \"memcached-0\" (UID: \"c31ba439-10a5-48d6-abb0-0cf70ce01151\") " pod="openstack/memcached-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.315228 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbvpf\" (UniqueName: \"kubernetes.io/projected/c31ba439-10a5-48d6-abb0-0cf70ce01151-kube-api-access-hbvpf\") pod \"memcached-0\" (UID: \"c31ba439-10a5-48d6-abb0-0cf70ce01151\") " pod="openstack/memcached-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.315284 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c31ba439-10a5-48d6-abb0-0cf70ce01151-config-data\") pod \"memcached-0\" (UID: \"c31ba439-10a5-48d6-abb0-0cf70ce01151\") " pod="openstack/memcached-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.315373 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c31ba439-10a5-48d6-abb0-0cf70ce01151-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c31ba439-10a5-48d6-abb0-0cf70ce01151\") " pod="openstack/memcached-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 
10:17:31.417964 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c31ba439-10a5-48d6-abb0-0cf70ce01151-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c31ba439-10a5-48d6-abb0-0cf70ce01151\") " pod="openstack/memcached-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.418098 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c31ba439-10a5-48d6-abb0-0cf70ce01151-kolla-config\") pod \"memcached-0\" (UID: \"c31ba439-10a5-48d6-abb0-0cf70ce01151\") " pod="openstack/memcached-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.418176 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbvpf\" (UniqueName: \"kubernetes.io/projected/c31ba439-10a5-48d6-abb0-0cf70ce01151-kube-api-access-hbvpf\") pod \"memcached-0\" (UID: \"c31ba439-10a5-48d6-abb0-0cf70ce01151\") " pod="openstack/memcached-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.418231 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c31ba439-10a5-48d6-abb0-0cf70ce01151-config-data\") pod \"memcached-0\" (UID: \"c31ba439-10a5-48d6-abb0-0cf70ce01151\") " pod="openstack/memcached-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.418253 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c31ba439-10a5-48d6-abb0-0cf70ce01151-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c31ba439-10a5-48d6-abb0-0cf70ce01151\") " pod="openstack/memcached-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.420907 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c31ba439-10a5-48d6-abb0-0cf70ce01151-config-data\") pod 
\"memcached-0\" (UID: \"c31ba439-10a5-48d6-abb0-0cf70ce01151\") " pod="openstack/memcached-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.421416 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c31ba439-10a5-48d6-abb0-0cf70ce01151-kolla-config\") pod \"memcached-0\" (UID: \"c31ba439-10a5-48d6-abb0-0cf70ce01151\") " pod="openstack/memcached-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.422807 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c31ba439-10a5-48d6-abb0-0cf70ce01151-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c31ba439-10a5-48d6-abb0-0cf70ce01151\") " pod="openstack/memcached-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.437256 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbvpf\" (UniqueName: \"kubernetes.io/projected/c31ba439-10a5-48d6-abb0-0cf70ce01151-kube-api-access-hbvpf\") pod \"memcached-0\" (UID: \"c31ba439-10a5-48d6-abb0-0cf70ce01151\") " pod="openstack/memcached-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.441328 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c31ba439-10a5-48d6-abb0-0cf70ce01151-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c31ba439-10a5-48d6-abb0-0cf70ce01151\") " pod="openstack/memcached-0" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.456017 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45624: no serving certificate available for the kubelet" Feb 24 10:17:31 crc kubenswrapper[4755]: I0224 10:17:31.584622 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 24 10:17:33 crc kubenswrapper[4755]: I0224 10:17:33.451215 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 24 10:17:33 crc kubenswrapper[4755]: I0224 10:17:33.452954 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 24 10:17:33 crc kubenswrapper[4755]: I0224 10:17:33.456256 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-jpd6m" Feb 24 10:17:33 crc kubenswrapper[4755]: I0224 10:17:33.461770 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 24 10:17:33 crc kubenswrapper[4755]: I0224 10:17:33.554159 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6jhw\" (UniqueName: \"kubernetes.io/projected/1e0fc0d5-6405-40e7-9203-0f937932b957-kube-api-access-d6jhw\") pod \"kube-state-metrics-0\" (UID: \"1e0fc0d5-6405-40e7-9203-0f937932b957\") " pod="openstack/kube-state-metrics-0" Feb 24 10:17:33 crc kubenswrapper[4755]: I0224 10:17:33.656682 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6jhw\" (UniqueName: \"kubernetes.io/projected/1e0fc0d5-6405-40e7-9203-0f937932b957-kube-api-access-d6jhw\") pod \"kube-state-metrics-0\" (UID: \"1e0fc0d5-6405-40e7-9203-0f937932b957\") " pod="openstack/kube-state-metrics-0" Feb 24 10:17:33 crc kubenswrapper[4755]: I0224 10:17:33.674954 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6jhw\" (UniqueName: \"kubernetes.io/projected/1e0fc0d5-6405-40e7-9203-0f937932b957-kube-api-access-d6jhw\") pod \"kube-state-metrics-0\" (UID: \"1e0fc0d5-6405-40e7-9203-0f937932b957\") " pod="openstack/kube-state-metrics-0" Feb 24 10:17:33 crc kubenswrapper[4755]: I0224 10:17:33.774981 4755 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.582036 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.583796 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.586187 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.586382 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.586511 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.586641 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.594554 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-4nr2m" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.615598 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.692224 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/80f22bc9-819c-4a7d-bfc9-5032341c6b98-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") " pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.692620 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/80f22bc9-819c-4a7d-bfc9-5032341c6b98-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") " pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.692776 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f22bc9-819c-4a7d-bfc9-5032341c6b98-config\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") " pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.692858 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80f22bc9-819c-4a7d-bfc9-5032341c6b98-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") " pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.692892 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/80f22bc9-819c-4a7d-bfc9-5032341c6b98-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") " pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.692923 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85skd\" (UniqueName: \"kubernetes.io/projected/80f22bc9-819c-4a7d-bfc9-5032341c6b98-kube-api-access-85skd\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") " pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.693011 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/80f22bc9-819c-4a7d-bfc9-5032341c6b98-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") " pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.693055 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") " pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.794886 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/80f22bc9-819c-4a7d-bfc9-5032341c6b98-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") " pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.795675 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f22bc9-819c-4a7d-bfc9-5032341c6b98-config\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") " pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.795733 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80f22bc9-819c-4a7d-bfc9-5032341c6b98-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") " pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.795762 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/80f22bc9-819c-4a7d-bfc9-5032341c6b98-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") " 
pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.795788 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85skd\" (UniqueName: \"kubernetes.io/projected/80f22bc9-819c-4a7d-bfc9-5032341c6b98-kube-api-access-85skd\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") " pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.795817 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80f22bc9-819c-4a7d-bfc9-5032341c6b98-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") " pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.795840 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") " pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.795901 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/80f22bc9-819c-4a7d-bfc9-5032341c6b98-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") " pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.796127 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.796318 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" 
(UniqueName: \"kubernetes.io/empty-dir/80f22bc9-819c-4a7d-bfc9-5032341c6b98-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") " pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.796624 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f22bc9-819c-4a7d-bfc9-5032341c6b98-config\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") " pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.797177 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80f22bc9-819c-4a7d-bfc9-5032341c6b98-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") " pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.799801 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/80f22bc9-819c-4a7d-bfc9-5032341c6b98-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") " pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.805741 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/80f22bc9-819c-4a7d-bfc9-5032341c6b98-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") " pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.808021 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80f22bc9-819c-4a7d-bfc9-5032341c6b98-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") " pod="openstack/ovsdbserver-nb-0" Feb 24 
10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.810407 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85skd\" (UniqueName: \"kubernetes.io/projected/80f22bc9-819c-4a7d-bfc9-5032341c6b98-kube-api-access-85skd\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") " pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.814392 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"80f22bc9-819c-4a7d-bfc9-5032341c6b98\") " pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:36 crc kubenswrapper[4755]: I0224 10:17:36.922133 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.038417 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7zjmh"] Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.039323 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7zjmh" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.043054 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-b86fd" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.044327 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.045035 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.050712 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7zjmh"] Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.057799 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-956vf"] Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.059390 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-956vf" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.077406 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-956vf"] Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.200282 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9kxc\" (UniqueName: \"kubernetes.io/projected/22e0eb37-c9e8-4e7c-a986-09c4c7700bd7-kube-api-access-h9kxc\") pod \"ovn-controller-7zjmh\" (UID: \"22e0eb37-c9e8-4e7c-a986-09c4c7700bd7\") " pod="openstack/ovn-controller-7zjmh" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.200335 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e0eb37-c9e8-4e7c-a986-09c4c7700bd7-combined-ca-bundle\") pod \"ovn-controller-7zjmh\" (UID: \"22e0eb37-c9e8-4e7c-a986-09c4c7700bd7\") " pod="openstack/ovn-controller-7zjmh" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.200425 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/43fb9b2a-5231-468f-972a-4bf96bbeaae4-etc-ovs\") pod \"ovn-controller-ovs-956vf\" (UID: \"43fb9b2a-5231-468f-972a-4bf96bbeaae4\") " pod="openstack/ovn-controller-ovs-956vf" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.200459 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/22e0eb37-c9e8-4e7c-a986-09c4c7700bd7-var-run\") pod \"ovn-controller-7zjmh\" (UID: \"22e0eb37-c9e8-4e7c-a986-09c4c7700bd7\") " pod="openstack/ovn-controller-7zjmh" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.200504 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/22e0eb37-c9e8-4e7c-a986-09c4c7700bd7-var-log-ovn\") pod \"ovn-controller-7zjmh\" (UID: \"22e0eb37-c9e8-4e7c-a986-09c4c7700bd7\") " pod="openstack/ovn-controller-7zjmh" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.200526 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22e0eb37-c9e8-4e7c-a986-09c4c7700bd7-scripts\") pod \"ovn-controller-7zjmh\" (UID: \"22e0eb37-c9e8-4e7c-a986-09c4c7700bd7\") " pod="openstack/ovn-controller-7zjmh" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.200548 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/22e0eb37-c9e8-4e7c-a986-09c4c7700bd7-var-run-ovn\") pod \"ovn-controller-7zjmh\" (UID: \"22e0eb37-c9e8-4e7c-a986-09c4c7700bd7\") " pod="openstack/ovn-controller-7zjmh" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.200648 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/43fb9b2a-5231-468f-972a-4bf96bbeaae4-scripts\") pod \"ovn-controller-ovs-956vf\" (UID: \"43fb9b2a-5231-468f-972a-4bf96bbeaae4\") " pod="openstack/ovn-controller-ovs-956vf" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.200728 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqz2f\" (UniqueName: \"kubernetes.io/projected/43fb9b2a-5231-468f-972a-4bf96bbeaae4-kube-api-access-jqz2f\") pod \"ovn-controller-ovs-956vf\" (UID: \"43fb9b2a-5231-468f-972a-4bf96bbeaae4\") " pod="openstack/ovn-controller-ovs-956vf" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.200835 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/22e0eb37-c9e8-4e7c-a986-09c4c7700bd7-ovn-controller-tls-certs\") pod \"ovn-controller-7zjmh\" (UID: \"22e0eb37-c9e8-4e7c-a986-09c4c7700bd7\") " pod="openstack/ovn-controller-7zjmh" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.200932 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/43fb9b2a-5231-468f-972a-4bf96bbeaae4-var-lib\") pod \"ovn-controller-ovs-956vf\" (UID: \"43fb9b2a-5231-468f-972a-4bf96bbeaae4\") " pod="openstack/ovn-controller-ovs-956vf" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.200970 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/43fb9b2a-5231-468f-972a-4bf96bbeaae4-var-log\") pod \"ovn-controller-ovs-956vf\" (UID: \"43fb9b2a-5231-468f-972a-4bf96bbeaae4\") " pod="openstack/ovn-controller-ovs-956vf" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.200997 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/43fb9b2a-5231-468f-972a-4bf96bbeaae4-var-run\") pod \"ovn-controller-ovs-956vf\" (UID: \"43fb9b2a-5231-468f-972a-4bf96bbeaae4\") " pod="openstack/ovn-controller-ovs-956vf" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.302751 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/22e0eb37-c9e8-4e7c-a986-09c4c7700bd7-var-run\") pod \"ovn-controller-7zjmh\" (UID: \"22e0eb37-c9e8-4e7c-a986-09c4c7700bd7\") " pod="openstack/ovn-controller-7zjmh" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.302819 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/22e0eb37-c9e8-4e7c-a986-09c4c7700bd7-var-log-ovn\") pod 
\"ovn-controller-7zjmh\" (UID: \"22e0eb37-c9e8-4e7c-a986-09c4c7700bd7\") " pod="openstack/ovn-controller-7zjmh" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.302838 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22e0eb37-c9e8-4e7c-a986-09c4c7700bd7-scripts\") pod \"ovn-controller-7zjmh\" (UID: \"22e0eb37-c9e8-4e7c-a986-09c4c7700bd7\") " pod="openstack/ovn-controller-7zjmh" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.302853 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/22e0eb37-c9e8-4e7c-a986-09c4c7700bd7-var-run-ovn\") pod \"ovn-controller-7zjmh\" (UID: \"22e0eb37-c9e8-4e7c-a986-09c4c7700bd7\") " pod="openstack/ovn-controller-7zjmh" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.302874 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/43fb9b2a-5231-468f-972a-4bf96bbeaae4-scripts\") pod \"ovn-controller-ovs-956vf\" (UID: \"43fb9b2a-5231-468f-972a-4bf96bbeaae4\") " pod="openstack/ovn-controller-ovs-956vf" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.302895 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqz2f\" (UniqueName: \"kubernetes.io/projected/43fb9b2a-5231-468f-972a-4bf96bbeaae4-kube-api-access-jqz2f\") pod \"ovn-controller-ovs-956vf\" (UID: \"43fb9b2a-5231-468f-972a-4bf96bbeaae4\") " pod="openstack/ovn-controller-ovs-956vf" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.302927 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/22e0eb37-c9e8-4e7c-a986-09c4c7700bd7-ovn-controller-tls-certs\") pod \"ovn-controller-7zjmh\" (UID: \"22e0eb37-c9e8-4e7c-a986-09c4c7700bd7\") " pod="openstack/ovn-controller-7zjmh" Feb 24 
10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.302958 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/43fb9b2a-5231-468f-972a-4bf96bbeaae4-var-lib\") pod \"ovn-controller-ovs-956vf\" (UID: \"43fb9b2a-5231-468f-972a-4bf96bbeaae4\") " pod="openstack/ovn-controller-ovs-956vf" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.302976 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/43fb9b2a-5231-468f-972a-4bf96bbeaae4-var-log\") pod \"ovn-controller-ovs-956vf\" (UID: \"43fb9b2a-5231-468f-972a-4bf96bbeaae4\") " pod="openstack/ovn-controller-ovs-956vf" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.302993 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/43fb9b2a-5231-468f-972a-4bf96bbeaae4-var-run\") pod \"ovn-controller-ovs-956vf\" (UID: \"43fb9b2a-5231-468f-972a-4bf96bbeaae4\") " pod="openstack/ovn-controller-ovs-956vf" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.303016 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9kxc\" (UniqueName: \"kubernetes.io/projected/22e0eb37-c9e8-4e7c-a986-09c4c7700bd7-kube-api-access-h9kxc\") pod \"ovn-controller-7zjmh\" (UID: \"22e0eb37-c9e8-4e7c-a986-09c4c7700bd7\") " pod="openstack/ovn-controller-7zjmh" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.303033 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e0eb37-c9e8-4e7c-a986-09c4c7700bd7-combined-ca-bundle\") pod \"ovn-controller-7zjmh\" (UID: \"22e0eb37-c9e8-4e7c-a986-09c4c7700bd7\") " pod="openstack/ovn-controller-7zjmh" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.303093 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/43fb9b2a-5231-468f-972a-4bf96bbeaae4-etc-ovs\") pod \"ovn-controller-ovs-956vf\" (UID: \"43fb9b2a-5231-468f-972a-4bf96bbeaae4\") " pod="openstack/ovn-controller-ovs-956vf" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.303449 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/22e0eb37-c9e8-4e7c-a986-09c4c7700bd7-var-log-ovn\") pod \"ovn-controller-7zjmh\" (UID: \"22e0eb37-c9e8-4e7c-a986-09c4c7700bd7\") " pod="openstack/ovn-controller-7zjmh" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.303490 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/43fb9b2a-5231-468f-972a-4bf96bbeaae4-etc-ovs\") pod \"ovn-controller-ovs-956vf\" (UID: \"43fb9b2a-5231-468f-972a-4bf96bbeaae4\") " pod="openstack/ovn-controller-ovs-956vf" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.303588 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/43fb9b2a-5231-468f-972a-4bf96bbeaae4-var-log\") pod \"ovn-controller-ovs-956vf\" (UID: \"43fb9b2a-5231-468f-972a-4bf96bbeaae4\") " pod="openstack/ovn-controller-ovs-956vf" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.303603 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/22e0eb37-c9e8-4e7c-a986-09c4c7700bd7-var-run\") pod \"ovn-controller-7zjmh\" (UID: \"22e0eb37-c9e8-4e7c-a986-09c4c7700bd7\") " pod="openstack/ovn-controller-7zjmh" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.303646 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/43fb9b2a-5231-468f-972a-4bf96bbeaae4-var-run\") pod \"ovn-controller-ovs-956vf\" (UID: \"43fb9b2a-5231-468f-972a-4bf96bbeaae4\") " 
pod="openstack/ovn-controller-ovs-956vf" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.303652 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/22e0eb37-c9e8-4e7c-a986-09c4c7700bd7-var-run-ovn\") pod \"ovn-controller-7zjmh\" (UID: \"22e0eb37-c9e8-4e7c-a986-09c4c7700bd7\") " pod="openstack/ovn-controller-7zjmh" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.303915 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/43fb9b2a-5231-468f-972a-4bf96bbeaae4-var-lib\") pod \"ovn-controller-ovs-956vf\" (UID: \"43fb9b2a-5231-468f-972a-4bf96bbeaae4\") " pod="openstack/ovn-controller-ovs-956vf" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.305390 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22e0eb37-c9e8-4e7c-a986-09c4c7700bd7-scripts\") pod \"ovn-controller-7zjmh\" (UID: \"22e0eb37-c9e8-4e7c-a986-09c4c7700bd7\") " pod="openstack/ovn-controller-7zjmh" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.308947 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/22e0eb37-c9e8-4e7c-a986-09c4c7700bd7-ovn-controller-tls-certs\") pod \"ovn-controller-7zjmh\" (UID: \"22e0eb37-c9e8-4e7c-a986-09c4c7700bd7\") " pod="openstack/ovn-controller-7zjmh" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.309282 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e0eb37-c9e8-4e7c-a986-09c4c7700bd7-combined-ca-bundle\") pod \"ovn-controller-7zjmh\" (UID: \"22e0eb37-c9e8-4e7c-a986-09c4c7700bd7\") " pod="openstack/ovn-controller-7zjmh" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.318756 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/43fb9b2a-5231-468f-972a-4bf96bbeaae4-scripts\") pod \"ovn-controller-ovs-956vf\" (UID: \"43fb9b2a-5231-468f-972a-4bf96bbeaae4\") " pod="openstack/ovn-controller-ovs-956vf" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.322723 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqz2f\" (UniqueName: \"kubernetes.io/projected/43fb9b2a-5231-468f-972a-4bf96bbeaae4-kube-api-access-jqz2f\") pod \"ovn-controller-ovs-956vf\" (UID: \"43fb9b2a-5231-468f-972a-4bf96bbeaae4\") " pod="openstack/ovn-controller-ovs-956vf" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.324177 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9kxc\" (UniqueName: \"kubernetes.io/projected/22e0eb37-c9e8-4e7c-a986-09c4c7700bd7-kube-api-access-h9kxc\") pod \"ovn-controller-7zjmh\" (UID: \"22e0eb37-c9e8-4e7c-a986-09c4c7700bd7\") " pod="openstack/ovn-controller-7zjmh" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.358026 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7zjmh" Feb 24 10:17:37 crc kubenswrapper[4755]: I0224 10:17:37.391917 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-956vf" Feb 24 10:17:38 crc kubenswrapper[4755]: I0224 10:17:38.647765 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 24 10:17:40 crc kubenswrapper[4755]: I0224 10:17:40.867091 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 24 10:17:40 crc kubenswrapper[4755]: I0224 10:17:40.869117 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:40 crc kubenswrapper[4755]: I0224 10:17:40.870953 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 24 10:17:40 crc kubenswrapper[4755]: I0224 10:17:40.871198 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-pcv48" Feb 24 10:17:40 crc kubenswrapper[4755]: I0224 10:17:40.871396 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 24 10:17:40 crc kubenswrapper[4755]: I0224 10:17:40.872562 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 24 10:17:40 crc kubenswrapper[4755]: I0224 10:17:40.881167 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 24 10:17:40 crc kubenswrapper[4755]: I0224 10:17:40.975613 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a045a5b-47f3-4eca-964c-0703e8333c88-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") " pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:40 crc kubenswrapper[4755]: I0224 10:17:40.975918 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a045a5b-47f3-4eca-964c-0703e8333c88-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") " pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:40 crc kubenswrapper[4755]: I0224 10:17:40.975964 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkggn\" (UniqueName: \"kubernetes.io/projected/1a045a5b-47f3-4eca-964c-0703e8333c88-kube-api-access-fkggn\") pod 
\"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") " pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:40 crc kubenswrapper[4755]: I0224 10:17:40.975992 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1a045a5b-47f3-4eca-964c-0703e8333c88-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") " pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:40 crc kubenswrapper[4755]: I0224 10:17:40.976029 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a045a5b-47f3-4eca-964c-0703e8333c88-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") " pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:40 crc kubenswrapper[4755]: I0224 10:17:40.976099 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a045a5b-47f3-4eca-964c-0703e8333c88-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") " pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:40 crc kubenswrapper[4755]: I0224 10:17:40.976147 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a045a5b-47f3-4eca-964c-0703e8333c88-config\") pod \"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") " pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:40 crc kubenswrapper[4755]: I0224 10:17:40.976194 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") " pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:41 
crc kubenswrapper[4755]: I0224 10:17:41.077757 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1a045a5b-47f3-4eca-964c-0703e8333c88-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") " pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:41 crc kubenswrapper[4755]: I0224 10:17:41.077807 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a045a5b-47f3-4eca-964c-0703e8333c88-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") " pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:41 crc kubenswrapper[4755]: I0224 10:17:41.077841 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a045a5b-47f3-4eca-964c-0703e8333c88-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") " pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:41 crc kubenswrapper[4755]: I0224 10:17:41.077897 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a045a5b-47f3-4eca-964c-0703e8333c88-config\") pod \"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") " pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:41 crc kubenswrapper[4755]: I0224 10:17:41.078328 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1a045a5b-47f3-4eca-964c-0703e8333c88-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") " pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:41 crc kubenswrapper[4755]: I0224 10:17:41.078926 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1a045a5b-47f3-4eca-964c-0703e8333c88-config\") pod \"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") " pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:41 crc kubenswrapper[4755]: I0224 10:17:41.078988 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a045a5b-47f3-4eca-964c-0703e8333c88-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") " pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:41 crc kubenswrapper[4755]: I0224 10:17:41.079048 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") " pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:41 crc kubenswrapper[4755]: I0224 10:17:41.079313 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:41 crc kubenswrapper[4755]: I0224 10:17:41.079458 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a045a5b-47f3-4eca-964c-0703e8333c88-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") " pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:41 crc kubenswrapper[4755]: I0224 10:17:41.079511 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a045a5b-47f3-4eca-964c-0703e8333c88-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") " pod="openstack/ovsdbserver-sb-0" Feb 
24 10:17:41 crc kubenswrapper[4755]: I0224 10:17:41.079539 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkggn\" (UniqueName: \"kubernetes.io/projected/1a045a5b-47f3-4eca-964c-0703e8333c88-kube-api-access-fkggn\") pod \"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") " pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:41 crc kubenswrapper[4755]: I0224 10:17:41.086166 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a045a5b-47f3-4eca-964c-0703e8333c88-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") " pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:41 crc kubenswrapper[4755]: I0224 10:17:41.086465 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a045a5b-47f3-4eca-964c-0703e8333c88-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") " pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:41 crc kubenswrapper[4755]: I0224 10:17:41.092291 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a045a5b-47f3-4eca-964c-0703e8333c88-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") " pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:41 crc kubenswrapper[4755]: I0224 10:17:41.101034 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkggn\" (UniqueName: \"kubernetes.io/projected/1a045a5b-47f3-4eca-964c-0703e8333c88-kube-api-access-fkggn\") pod \"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") " pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:41 crc kubenswrapper[4755]: I0224 10:17:41.101045 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1a045a5b-47f3-4eca-964c-0703e8333c88\") " pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:41 crc kubenswrapper[4755]: I0224 10:17:41.221084 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:44 crc kubenswrapper[4755]: W0224 10:17:44.633225 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f320527_691f_48e9_a243_f60bc805da39.slice/crio-d941a1fea5f5ea80b3dae38ddb2d83ff307290f925cbc7bdee5f8d3c0de70c4d WatchSource:0}: Error finding container d941a1fea5f5ea80b3dae38ddb2d83ff307290f925cbc7bdee5f8d3c0de70c4d: Status 404 returned error can't find the container with id d941a1fea5f5ea80b3dae38ddb2d83ff307290f925cbc7bdee5f8d3c0de70c4d Feb 24 10:17:45 crc kubenswrapper[4755]: E0224 10:17:45.437528 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" Feb 24 10:17:45 crc kubenswrapper[4755]: E0224 10:17:45.437879 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wr6dk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6dfb8ff55f-z5jxd_openstack(7872be5b-22c5-441e-8b92-fccc79705037): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 10:17:45 crc kubenswrapper[4755]: E0224 10:17:45.439370 4755 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6dfb8ff55f-z5jxd" podUID="7872be5b-22c5-441e-8b92-fccc79705037" Feb 24 10:17:45 crc kubenswrapper[4755]: E0224 10:17:45.442194 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" Feb 24 10:17:45 crc kubenswrapper[4755]: E0224 10:17:45.442347 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrlpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-776868475c-pnc8b_openstack(2221632c-ed64-4067-b217-f34dc4ca3cfc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 10:17:45 crc kubenswrapper[4755]: E0224 10:17:45.445034 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-776868475c-pnc8b" podUID="2221632c-ed64-4067-b217-f34dc4ca3cfc" Feb 24 10:17:45 crc kubenswrapper[4755]: E0224 10:17:45.445591 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" Feb 24 10:17:45 crc kubenswrapper[4755]: E0224 10:17:45.445706 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbbf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:n
il,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-648df8f4f9-vnf58_openstack(ab893943-e1e2-489d-ab08-304b6aa3ea9e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 10:17:45 crc kubenswrapper[4755]: E0224 10:17:45.446828 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-648df8f4f9-vnf58" podUID="ab893943-e1e2-489d-ab08-304b6aa3ea9e" Feb 24 10:17:45 crc kubenswrapper[4755]: I0224 10:17:45.464900 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2f320527-691f-48e9-a243-f60bc805da39","Type":"ContainerStarted","Data":"d941a1fea5f5ea80b3dae38ddb2d83ff307290f925cbc7bdee5f8d3c0de70c4d"} Feb 24 10:17:45 crc kubenswrapper[4755]: E0224 10:17:45.469032 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd" Feb 24 10:17:45 crc kubenswrapper[4755]: E0224 10:17:45.469163 4755 kuberuntime_manager.go:1274] "Unhandled Error" 
err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nlz8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]
EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5c9fb6c8b7-sdps9_openstack(d59fa737-51c8-412d-b6c5-150038e26abb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 10:17:45 crc kubenswrapper[4755]: E0224 10:17:45.469481 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd\\\"\"" pod="openstack/dnsmasq-dns-6dfb8ff55f-z5jxd" podUID="7872be5b-22c5-441e-8b92-fccc79705037" Feb 24 10:17:45 crc kubenswrapper[4755]: E0224 10:17:45.471204 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5c9fb6c8b7-sdps9" podUID="d59fa737-51c8-412d-b6c5-150038e26abb" Feb 24 10:17:45 crc kubenswrapper[4755]: I0224 10:17:45.790543 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 24 10:17:45 crc kubenswrapper[4755]: W0224 10:17:45.798209 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe13802e_a28d_4e11_a315_c0ae66bf0e1c.slice/crio-42b9003fefc442c03a546fe39edf862a2f3a72f39a11d2273bef944060b9e342 WatchSource:0}: Error finding container 42b9003fefc442c03a546fe39edf862a2f3a72f39a11d2273bef944060b9e342: Status 404 returned error can't find the container with id 42b9003fefc442c03a546fe39edf862a2f3a72f39a11d2273bef944060b9e342 Feb 24 10:17:45 crc kubenswrapper[4755]: I0224 10:17:45.966361 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/kube-state-metrics-0"] Feb 24 10:17:45 crc kubenswrapper[4755]: W0224 10:17:45.973511 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc31ba439_10a5_48d6_abb0_0cf70ce01151.slice/crio-f2ae1a3e412cdce52bd079c2e56a257a6e4e105f8621ec724f156f440b3e6d87 WatchSource:0}: Error finding container f2ae1a3e412cdce52bd079c2e56a257a6e4e105f8621ec724f156f440b3e6d87: Status 404 returned error can't find the container with id f2ae1a3e412cdce52bd079c2e56a257a6e4e105f8621ec724f156f440b3e6d87 Feb 24 10:17:45 crc kubenswrapper[4755]: I0224 10:17:45.977539 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 24 10:17:45 crc kubenswrapper[4755]: W0224 10:17:45.988035 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e0fc0d5_6405_40e7_9203_0f937932b957.slice/crio-fcb745beabff5b2d0ed1ec82f1cf8f93f85dc9035957c92452df09e757b388a5 WatchSource:0}: Error finding container fcb745beabff5b2d0ed1ec82f1cf8f93f85dc9035957c92452df09e757b388a5: Status 404 returned error can't find the container with id fcb745beabff5b2d0ed1ec82f1cf8f93f85dc9035957c92452df09e757b388a5 Feb 24 10:17:46 crc kubenswrapper[4755]: I0224 10:17:46.025680 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-776868475c-pnc8b" Feb 24 10:17:46 crc kubenswrapper[4755]: I0224 10:17:46.088911 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 24 10:17:46 crc kubenswrapper[4755]: I0224 10:17:46.121050 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7zjmh"] Feb 24 10:17:46 crc kubenswrapper[4755]: I0224 10:17:46.180137 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2221632c-ed64-4067-b217-f34dc4ca3cfc-config\") pod \"2221632c-ed64-4067-b217-f34dc4ca3cfc\" (UID: \"2221632c-ed64-4067-b217-f34dc4ca3cfc\") " Feb 24 10:17:46 crc kubenswrapper[4755]: I0224 10:17:46.180294 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrlpb\" (UniqueName: \"kubernetes.io/projected/2221632c-ed64-4067-b217-f34dc4ca3cfc-kube-api-access-vrlpb\") pod \"2221632c-ed64-4067-b217-f34dc4ca3cfc\" (UID: \"2221632c-ed64-4067-b217-f34dc4ca3cfc\") " Feb 24 10:17:46 crc kubenswrapper[4755]: I0224 10:17:46.181270 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2221632c-ed64-4067-b217-f34dc4ca3cfc-config" (OuterVolumeSpecName: "config") pod "2221632c-ed64-4067-b217-f34dc4ca3cfc" (UID: "2221632c-ed64-4067-b217-f34dc4ca3cfc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:17:46 crc kubenswrapper[4755]: I0224 10:17:46.190692 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-956vf"] Feb 24 10:17:46 crc kubenswrapper[4755]: W0224 10:17:46.192255 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43fb9b2a_5231_468f_972a_4bf96bbeaae4.slice/crio-17a51b6a81e8045857ef3e72895fe4f10b5dbd80950b05670b973ea42e4f522e WatchSource:0}: Error finding container 17a51b6a81e8045857ef3e72895fe4f10b5dbd80950b05670b973ea42e4f522e: Status 404 returned error can't find the container with id 17a51b6a81e8045857ef3e72895fe4f10b5dbd80950b05670b973ea42e4f522e Feb 24 10:17:46 crc kubenswrapper[4755]: I0224 10:17:46.280493 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2221632c-ed64-4067-b217-f34dc4ca3cfc-kube-api-access-vrlpb" (OuterVolumeSpecName: "kube-api-access-vrlpb") pod "2221632c-ed64-4067-b217-f34dc4ca3cfc" (UID: "2221632c-ed64-4067-b217-f34dc4ca3cfc"). InnerVolumeSpecName "kube-api-access-vrlpb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:17:46 crc kubenswrapper[4755]: I0224 10:17:46.282053 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2221632c-ed64-4067-b217-f34dc4ca3cfc-config\") on node \"crc\" DevicePath \"\"" Feb 24 10:17:46 crc kubenswrapper[4755]: I0224 10:17:46.282099 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrlpb\" (UniqueName: \"kubernetes.io/projected/2221632c-ed64-4067-b217-f34dc4ca3cfc-kube-api-access-vrlpb\") on node \"crc\" DevicePath \"\"" Feb 24 10:17:46 crc kubenswrapper[4755]: I0224 10:17:46.472681 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6f17c8cb-fdce-4338-a08b-351043733dd8","Type":"ContainerStarted","Data":"4398a9085d6666f29c47caee783a17004fe8ee15c1174181c3dc3c79c676743b"} Feb 24 10:17:46 crc kubenswrapper[4755]: I0224 10:17:46.474055 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7zjmh" event={"ID":"22e0eb37-c9e8-4e7c-a986-09c4c7700bd7","Type":"ContainerStarted","Data":"63e2e3b2906fc281c2ffa5b70371b68523730e40997c90260b9fb85201681b7b"} Feb 24 10:17:46 crc kubenswrapper[4755]: I0224 10:17:46.475486 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c31ba439-10a5-48d6-abb0-0cf70ce01151","Type":"ContainerStarted","Data":"f2ae1a3e412cdce52bd079c2e56a257a6e4e105f8621ec724f156f440b3e6d87"} Feb 24 10:17:46 crc kubenswrapper[4755]: I0224 10:17:46.476852 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-956vf" event={"ID":"43fb9b2a-5231-468f-972a-4bf96bbeaae4","Type":"ContainerStarted","Data":"17a51b6a81e8045857ef3e72895fe4f10b5dbd80950b05670b973ea42e4f522e"} Feb 24 10:17:46 crc kubenswrapper[4755]: I0224 10:17:46.478362 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"1a045a5b-47f3-4eca-964c-0703e8333c88","Type":"ContainerStarted","Data":"279e11155c8989562399ebcba4059a0d4f39225bc7d226e90f44b7d54ce2649c"} Feb 24 10:17:46 crc kubenswrapper[4755]: I0224 10:17:46.480732 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe13802e-a28d-4e11-a315-c0ae66bf0e1c","Type":"ContainerStarted","Data":"42b9003fefc442c03a546fe39edf862a2f3a72f39a11d2273bef944060b9e342"} Feb 24 10:17:46 crc kubenswrapper[4755]: I0224 10:17:46.481887 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"36ea9987-0fd2-4c40-845d-463254c1fecf","Type":"ContainerStarted","Data":"3bbfb182a229dbbd646b146a6041ddeaa1238db4c51ce14ad502b3e53e080b8a"} Feb 24 10:17:46 crc kubenswrapper[4755]: I0224 10:17:46.482900 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1e0fc0d5-6405-40e7-9203-0f937932b957","Type":"ContainerStarted","Data":"fcb745beabff5b2d0ed1ec82f1cf8f93f85dc9035957c92452df09e757b388a5"} Feb 24 10:17:46 crc kubenswrapper[4755]: I0224 10:17:46.483847 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-776868475c-pnc8b" event={"ID":"2221632c-ed64-4067-b217-f34dc4ca3cfc","Type":"ContainerDied","Data":"60b2bfdcece9ec81e0e4470001c7496560dd0c136c7c6228a283a66561681275"} Feb 24 10:17:46 crc kubenswrapper[4755]: I0224 10:17:46.483922 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-776868475c-pnc8b" Feb 24 10:17:46 crc kubenswrapper[4755]: E0224 10:17:46.484821 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd\\\"\"" pod="openstack/dnsmasq-dns-5c9fb6c8b7-sdps9" podUID="d59fa737-51c8-412d-b6c5-150038e26abb" Feb 24 10:17:46 crc kubenswrapper[4755]: I0224 10:17:46.542301 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-776868475c-pnc8b"] Feb 24 10:17:46 crc kubenswrapper[4755]: I0224 10:17:46.550421 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-776868475c-pnc8b"] Feb 24 10:17:46 crc kubenswrapper[4755]: I0224 10:17:46.660301 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 24 10:17:46 crc kubenswrapper[4755]: I0224 10:17:46.878823 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-648df8f4f9-vnf58" Feb 24 10:17:47 crc kubenswrapper[4755]: I0224 10:17:47.015776 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab893943-e1e2-489d-ab08-304b6aa3ea9e-config\") pod \"ab893943-e1e2-489d-ab08-304b6aa3ea9e\" (UID: \"ab893943-e1e2-489d-ab08-304b6aa3ea9e\") " Feb 24 10:17:47 crc kubenswrapper[4755]: I0224 10:17:47.016202 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbbf8\" (UniqueName: \"kubernetes.io/projected/ab893943-e1e2-489d-ab08-304b6aa3ea9e-kube-api-access-jbbf8\") pod \"ab893943-e1e2-489d-ab08-304b6aa3ea9e\" (UID: \"ab893943-e1e2-489d-ab08-304b6aa3ea9e\") " Feb 24 10:17:47 crc kubenswrapper[4755]: I0224 10:17:47.016243 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab893943-e1e2-489d-ab08-304b6aa3ea9e-dns-svc\") pod \"ab893943-e1e2-489d-ab08-304b6aa3ea9e\" (UID: \"ab893943-e1e2-489d-ab08-304b6aa3ea9e\") " Feb 24 10:17:47 crc kubenswrapper[4755]: I0224 10:17:47.016893 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab893943-e1e2-489d-ab08-304b6aa3ea9e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ab893943-e1e2-489d-ab08-304b6aa3ea9e" (UID: "ab893943-e1e2-489d-ab08-304b6aa3ea9e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:17:47 crc kubenswrapper[4755]: I0224 10:17:47.017109 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab893943-e1e2-489d-ab08-304b6aa3ea9e-config" (OuterVolumeSpecName: "config") pod "ab893943-e1e2-489d-ab08-304b6aa3ea9e" (UID: "ab893943-e1e2-489d-ab08-304b6aa3ea9e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:17:47 crc kubenswrapper[4755]: I0224 10:17:47.020085 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab893943-e1e2-489d-ab08-304b6aa3ea9e-kube-api-access-jbbf8" (OuterVolumeSpecName: "kube-api-access-jbbf8") pod "ab893943-e1e2-489d-ab08-304b6aa3ea9e" (UID: "ab893943-e1e2-489d-ab08-304b6aa3ea9e"). InnerVolumeSpecName "kube-api-access-jbbf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:17:47 crc kubenswrapper[4755]: I0224 10:17:47.118762 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbbf8\" (UniqueName: \"kubernetes.io/projected/ab893943-e1e2-489d-ab08-304b6aa3ea9e-kube-api-access-jbbf8\") on node \"crc\" DevicePath \"\"" Feb 24 10:17:47 crc kubenswrapper[4755]: I0224 10:17:47.118796 4755 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab893943-e1e2-489d-ab08-304b6aa3ea9e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 24 10:17:47 crc kubenswrapper[4755]: I0224 10:17:47.118806 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab893943-e1e2-489d-ab08-304b6aa3ea9e-config\") on node \"crc\" DevicePath \"\"" Feb 24 10:17:47 crc kubenswrapper[4755]: I0224 10:17:47.492250 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-648df8f4f9-vnf58" event={"ID":"ab893943-e1e2-489d-ab08-304b6aa3ea9e","Type":"ContainerDied","Data":"2651628a9b22db4a41b84c0ad72b06e3db1cbd95daecf9d3ac294a9492e4e01c"} Feb 24 10:17:47 crc kubenswrapper[4755]: I0224 10:17:47.492298 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-648df8f4f9-vnf58" Feb 24 10:17:47 crc kubenswrapper[4755]: I0224 10:17:47.504917 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"80f22bc9-819c-4a7d-bfc9-5032341c6b98","Type":"ContainerStarted","Data":"cecd648ac3a5c20d5c49aedea39697993bef1088ff551e4638231f94ff28ac6c"} Feb 24 10:17:47 crc kubenswrapper[4755]: I0224 10:17:47.563702 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-648df8f4f9-vnf58"] Feb 24 10:17:47 crc kubenswrapper[4755]: I0224 10:17:47.571360 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-648df8f4f9-vnf58"] Feb 24 10:17:48 crc kubenswrapper[4755]: I0224 10:17:48.327056 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2221632c-ed64-4067-b217-f34dc4ca3cfc" path="/var/lib/kubelet/pods/2221632c-ed64-4067-b217-f34dc4ca3cfc/volumes" Feb 24 10:17:48 crc kubenswrapper[4755]: I0224 10:17:48.327834 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab893943-e1e2-489d-ab08-304b6aa3ea9e" path="/var/lib/kubelet/pods/ab893943-e1e2-489d-ab08-304b6aa3ea9e/volumes" Feb 24 10:17:56 crc kubenswrapper[4755]: I0224 10:17:56.582524 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1e0fc0d5-6405-40e7-9203-0f937932b957","Type":"ContainerStarted","Data":"c80477701078921be3e0f8e35fee694b8d271a29798bd6e541801e1601f64ef8"} Feb 24 10:17:56 crc kubenswrapper[4755]: I0224 10:17:56.582802 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 24 10:17:56 crc kubenswrapper[4755]: I0224 10:17:56.586471 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c31ba439-10a5-48d6-abb0-0cf70ce01151","Type":"ContainerStarted","Data":"6254600876c2e0d5584c9d5678531966497be625b260eac0ec0ce1e256ff9dc0"} Feb 24 10:17:56 crc 
kubenswrapper[4755]: I0224 10:17:56.586564 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 24 10:17:56 crc kubenswrapper[4755]: I0224 10:17:56.589081 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-956vf" event={"ID":"43fb9b2a-5231-468f-972a-4bf96bbeaae4","Type":"ContainerStarted","Data":"c7b5b6136543d22788182c18161a824532db72c6230f96078b4d33632f2f7655"} Feb 24 10:17:56 crc kubenswrapper[4755]: I0224 10:17:56.591043 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"80f22bc9-819c-4a7d-bfc9-5032341c6b98","Type":"ContainerStarted","Data":"9a441379c8b6429174af2e13bc86eca1b35baa1e1bbfef3094d98b0607afbf61"} Feb 24 10:17:56 crc kubenswrapper[4755]: I0224 10:17:56.592305 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1a045a5b-47f3-4eca-964c-0703e8333c88","Type":"ContainerStarted","Data":"2c5393de2d120c3044064d88e94561fb711f27225c886f4e6014511f14e3dbb1"} Feb 24 10:17:56 crc kubenswrapper[4755]: I0224 10:17:56.593826 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2f320527-691f-48e9-a243-f60bc805da39","Type":"ContainerStarted","Data":"9dccce9f3f6933e27a3f3c76119b0ee67599fc6f30ed7d77edc9fe38bcc8ece3"} Feb 24 10:17:56 crc kubenswrapper[4755]: I0224 10:17:56.599445 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=13.330897038 podStartE2EDuration="23.599425361s" podCreationTimestamp="2026-02-24 10:17:33 +0000 UTC" firstStartedPulling="2026-02-24 10:17:45.992697308 +0000 UTC m=+1370.449219841" lastFinishedPulling="2026-02-24 10:17:56.261225621 +0000 UTC m=+1380.717748164" observedRunningTime="2026-02-24 10:17:56.59841714 +0000 UTC m=+1381.054939683" watchObservedRunningTime="2026-02-24 10:17:56.599425361 +0000 UTC m=+1381.055947914" Feb 24 
10:17:56 crc kubenswrapper[4755]: I0224 10:17:56.599784 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe13802e-a28d-4e11-a315-c0ae66bf0e1c","Type":"ContainerStarted","Data":"c6b04d15b4c4dfec76fb5ec23edd6b560f92806090ae261467053b5307d70b36"} Feb 24 10:17:56 crc kubenswrapper[4755]: I0224 10:17:56.603209 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7zjmh" event={"ID":"22e0eb37-c9e8-4e7c-a986-09c4c7700bd7","Type":"ContainerStarted","Data":"0401a1b791931084f136ed6c0b623648ee8af1efbfb85902130a7af1c48349a0"} Feb 24 10:17:56 crc kubenswrapper[4755]: I0224 10:17:56.603358 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-7zjmh" Feb 24 10:17:56 crc kubenswrapper[4755]: I0224 10:17:56.670270 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=16.064599125 podStartE2EDuration="25.670248168s" podCreationTimestamp="2026-02-24 10:17:31 +0000 UTC" firstStartedPulling="2026-02-24 10:17:45.97565381 +0000 UTC m=+1370.432176353" lastFinishedPulling="2026-02-24 10:17:55.581302813 +0000 UTC m=+1380.037825396" observedRunningTime="2026-02-24 10:17:56.664036445 +0000 UTC m=+1381.120558988" watchObservedRunningTime="2026-02-24 10:17:56.670248168 +0000 UTC m=+1381.126770721" Feb 24 10:17:56 crc kubenswrapper[4755]: I0224 10:17:56.751299 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-7zjmh" podStartSLOduration=9.744757874 podStartE2EDuration="19.751278321s" podCreationTimestamp="2026-02-24 10:17:37 +0000 UTC" firstStartedPulling="2026-02-24 10:17:46.134873848 +0000 UTC m=+1370.591396391" lastFinishedPulling="2026-02-24 10:17:56.141394295 +0000 UTC m=+1380.597916838" observedRunningTime="2026-02-24 10:17:56.743046516 +0000 UTC m=+1381.199569059" watchObservedRunningTime="2026-02-24 10:17:56.751278321 +0000 UTC 
m=+1381.207800864" Feb 24 10:17:57 crc kubenswrapper[4755]: I0224 10:17:57.628212 4755 generic.go:334] "Generic (PLEG): container finished" podID="43fb9b2a-5231-468f-972a-4bf96bbeaae4" containerID="c7b5b6136543d22788182c18161a824532db72c6230f96078b4d33632f2f7655" exitCode=0 Feb 24 10:17:57 crc kubenswrapper[4755]: I0224 10:17:57.628400 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-956vf" event={"ID":"43fb9b2a-5231-468f-972a-4bf96bbeaae4","Type":"ContainerDied","Data":"c7b5b6136543d22788182c18161a824532db72c6230f96078b4d33632f2f7655"} Feb 24 10:17:58 crc kubenswrapper[4755]: I0224 10:17:58.645696 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"80f22bc9-819c-4a7d-bfc9-5032341c6b98","Type":"ContainerStarted","Data":"fefb10260fc1cdc18d84d2c03e7d2c6e19621f3eb3ae6205e9703d32657f7290"} Feb 24 10:17:58 crc kubenswrapper[4755]: I0224 10:17:58.651382 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1a045a5b-47f3-4eca-964c-0703e8333c88","Type":"ContainerStarted","Data":"77425009415e551df89f35895f076d8d6ff9176ea6f79753833c8e35bd3beb6b"} Feb 24 10:17:58 crc kubenswrapper[4755]: I0224 10:17:58.653318 4755 generic.go:334] "Generic (PLEG): container finished" podID="7872be5b-22c5-441e-8b92-fccc79705037" containerID="5618aeeb59391d0b5a5c9c8db7e674518fe970371f1551421d56f9752cc1bc08" exitCode=0 Feb 24 10:17:58 crc kubenswrapper[4755]: I0224 10:17:58.653386 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dfb8ff55f-z5jxd" event={"ID":"7872be5b-22c5-441e-8b92-fccc79705037","Type":"ContainerDied","Data":"5618aeeb59391d0b5a5c9c8db7e674518fe970371f1551421d56f9752cc1bc08"} Feb 24 10:17:58 crc kubenswrapper[4755]: I0224 10:17:58.659296 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-956vf" 
event={"ID":"43fb9b2a-5231-468f-972a-4bf96bbeaae4","Type":"ContainerStarted","Data":"0d62c94e94662b864573c2f5c0ff7256d71b7c56bae7497dbf6acd7fd2a5badf"} Feb 24 10:17:58 crc kubenswrapper[4755]: I0224 10:17:58.681104 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=12.276136606 podStartE2EDuration="23.681081887s" podCreationTimestamp="2026-02-24 10:17:35 +0000 UTC" firstStartedPulling="2026-02-24 10:17:46.700969856 +0000 UTC m=+1371.157492399" lastFinishedPulling="2026-02-24 10:17:58.105915107 +0000 UTC m=+1382.562437680" observedRunningTime="2026-02-24 10:17:58.680562271 +0000 UTC m=+1383.137084884" watchObservedRunningTime="2026-02-24 10:17:58.681081887 +0000 UTC m=+1383.137604440" Feb 24 10:17:58 crc kubenswrapper[4755]: I0224 10:17:58.715695 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=7.690993917 podStartE2EDuration="19.71567328s" podCreationTimestamp="2026-02-24 10:17:39 +0000 UTC" firstStartedPulling="2026-02-24 10:17:46.09881364 +0000 UTC m=+1370.555336183" lastFinishedPulling="2026-02-24 10:17:58.123493003 +0000 UTC m=+1382.580015546" observedRunningTime="2026-02-24 10:17:58.707267139 +0000 UTC m=+1383.163789722" watchObservedRunningTime="2026-02-24 10:17:58.71567328 +0000 UTC m=+1383.172195833" Feb 24 10:17:59 crc kubenswrapper[4755]: I0224 10:17:59.221847 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:59 crc kubenswrapper[4755]: I0224 10:17:59.301967 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:59 crc kubenswrapper[4755]: I0224 10:17:59.674804 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-956vf" 
event={"ID":"43fb9b2a-5231-468f-972a-4bf96bbeaae4","Type":"ContainerStarted","Data":"2b5a5cf5c16da5af7049bfd21fdcbc57279195111641f3d235283dc5f8d5967d"} Feb 24 10:17:59 crc kubenswrapper[4755]: I0224 10:17:59.675140 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-956vf" Feb 24 10:17:59 crc kubenswrapper[4755]: I0224 10:17:59.675403 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-956vf" Feb 24 10:17:59 crc kubenswrapper[4755]: I0224 10:17:59.677382 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dfb8ff55f-z5jxd" event={"ID":"7872be5b-22c5-441e-8b92-fccc79705037","Type":"ContainerStarted","Data":"09ec6bae85e350eefb342c65d211ec820512e27c24c92f0492e53af6d353c324"} Feb 24 10:17:59 crc kubenswrapper[4755]: I0224 10:17:59.678214 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 24 10:17:59 crc kubenswrapper[4755]: I0224 10:17:59.709661 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-956vf" podStartSLOduration=13.218726075 podStartE2EDuration="22.709640719s" podCreationTimestamp="2026-02-24 10:17:37 +0000 UTC" firstStartedPulling="2026-02-24 10:17:46.194655673 +0000 UTC m=+1370.651178216" lastFinishedPulling="2026-02-24 10:17:55.685570287 +0000 UTC m=+1380.142092860" observedRunningTime="2026-02-24 10:17:59.708917187 +0000 UTC m=+1384.165439780" watchObservedRunningTime="2026-02-24 10:17:59.709640719 +0000 UTC m=+1384.166163272" Feb 24 10:17:59 crc kubenswrapper[4755]: I0224 10:17:59.746451 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6dfb8ff55f-z5jxd" podStartSLOduration=3.428711234 podStartE2EDuration="33.74641279s" podCreationTimestamp="2026-02-24 10:17:26 +0000 UTC" firstStartedPulling="2026-02-24 10:17:27.798430988 +0000 UTC m=+1352.254953521" 
lastFinishedPulling="2026-02-24 10:17:58.116132534 +0000 UTC m=+1382.572655077" observedRunningTime="2026-02-24 10:17:59.738619348 +0000 UTC m=+1384.195141901" watchObservedRunningTime="2026-02-24 10:17:59.74641279 +0000 UTC m=+1384.202935383" Feb 24 10:18:00 crc kubenswrapper[4755]: I0224 10:18:00.691307 4755 generic.go:334] "Generic (PLEG): container finished" podID="2f320527-691f-48e9-a243-f60bc805da39" containerID="9dccce9f3f6933e27a3f3c76119b0ee67599fc6f30ed7d77edc9fe38bcc8ece3" exitCode=0 Feb 24 10:18:00 crc kubenswrapper[4755]: I0224 10:18:00.691438 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2f320527-691f-48e9-a243-f60bc805da39","Type":"ContainerDied","Data":"9dccce9f3f6933e27a3f3c76119b0ee67599fc6f30ed7d77edc9fe38bcc8ece3"} Feb 24 10:18:00 crc kubenswrapper[4755]: I0224 10:18:00.696323 4755 generic.go:334] "Generic (PLEG): container finished" podID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" containerID="c6b04d15b4c4dfec76fb5ec23edd6b560f92806090ae261467053b5307d70b36" exitCode=0 Feb 24 10:18:00 crc kubenswrapper[4755]: I0224 10:18:00.696411 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe13802e-a28d-4e11-a315-c0ae66bf0e1c","Type":"ContainerDied","Data":"c6b04d15b4c4dfec76fb5ec23edd6b560f92806090ae261467053b5307d70b36"} Feb 24 10:18:00 crc kubenswrapper[4755]: I0224 10:18:00.701489 4755 generic.go:334] "Generic (PLEG): container finished" podID="d59fa737-51c8-412d-b6c5-150038e26abb" containerID="05c92824ea6c4412b6f9a832fb8c9d7dbf0396cc6c186986eb7fe3f6bf95017b" exitCode=0 Feb 24 10:18:00 crc kubenswrapper[4755]: I0224 10:18:00.701593 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fb6c8b7-sdps9" event={"ID":"d59fa737-51c8-412d-b6c5-150038e26abb","Type":"ContainerDied","Data":"05c92824ea6c4412b6f9a832fb8c9d7dbf0396cc6c186986eb7fe3f6bf95017b"} Feb 24 10:18:00 crc kubenswrapper[4755]: I0224 10:18:00.923307 
4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 24 10:18:00 crc kubenswrapper[4755]: I0224 10:18:00.971724 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.275525 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.588978 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9fb6c8b7-sdps9"] Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.591052 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.615487 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77c77c9f89-tggz4"] Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.616841 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.618830 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.637726 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77c77c9f89-tggz4"] Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.673791 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a75a7d6b-7798-43f7-86b4-539d9df80312-ovsdbserver-sb\") pod \"dnsmasq-dns-77c77c9f89-tggz4\" (UID: \"a75a7d6b-7798-43f7-86b4-539d9df80312\") " pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.673861 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a75a7d6b-7798-43f7-86b4-539d9df80312-dns-svc\") pod \"dnsmasq-dns-77c77c9f89-tggz4\" (UID: \"a75a7d6b-7798-43f7-86b4-539d9df80312\") " pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.674057 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a75a7d6b-7798-43f7-86b4-539d9df80312-config\") pod \"dnsmasq-dns-77c77c9f89-tggz4\" (UID: \"a75a7d6b-7798-43f7-86b4-539d9df80312\") " pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.674349 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq2mk\" (UniqueName: \"kubernetes.io/projected/a75a7d6b-7798-43f7-86b4-539d9df80312-kube-api-access-kq2mk\") pod \"dnsmasq-dns-77c77c9f89-tggz4\" (UID: \"a75a7d6b-7798-43f7-86b4-539d9df80312\") " 
pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.705791 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-2wlpg"] Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.706679 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-2wlpg" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.709374 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.713748 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fb6c8b7-sdps9" event={"ID":"d59fa737-51c8-412d-b6c5-150038e26abb","Type":"ContainerStarted","Data":"450df0e217c83e31403c618a688d31a0cc8ab35e01beaa2ac7d7b8a9e7dad3d9"} Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.714137 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9fb6c8b7-sdps9" podUID="d59fa737-51c8-412d-b6c5-150038e26abb" containerName="dnsmasq-dns" containerID="cri-o://450df0e217c83e31403c618a688d31a0cc8ab35e01beaa2ac7d7b8a9e7dad3d9" gracePeriod=10 Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.716698 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2f320527-691f-48e9-a243-f60bc805da39","Type":"ContainerStarted","Data":"2e538d3c9647af0ce3e2cda7edd5083e55d8a51afe1081319c126ae17e558431"} Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.720880 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2wlpg"] Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.726466 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"fe13802e-a28d-4e11-a315-c0ae66bf0e1c","Type":"ContainerStarted","Data":"c23b00c172415a5d4404660869d9355bcca364f8424ce1b7961a9b52aedf43de"} Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.726891 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.777022 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a75a7d6b-7798-43f7-86b4-539d9df80312-dns-svc\") pod \"dnsmasq-dns-77c77c9f89-tggz4\" (UID: \"a75a7d6b-7798-43f7-86b4-539d9df80312\") " pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.777152 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b0acfb5-ec04-4ca5-aabd-b230dab45121-config\") pod \"ovn-controller-metrics-2wlpg\" (UID: \"0b0acfb5-ec04-4ca5-aabd-b230dab45121\") " pod="openstack/ovn-controller-metrics-2wlpg" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.777175 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b0acfb5-ec04-4ca5-aabd-b230dab45121-combined-ca-bundle\") pod \"ovn-controller-metrics-2wlpg\" (UID: \"0b0acfb5-ec04-4ca5-aabd-b230dab45121\") " pod="openstack/ovn-controller-metrics-2wlpg" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.777193 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0b0acfb5-ec04-4ca5-aabd-b230dab45121-ovn-rundir\") pod \"ovn-controller-metrics-2wlpg\" (UID: \"0b0acfb5-ec04-4ca5-aabd-b230dab45121\") " pod="openstack/ovn-controller-metrics-2wlpg" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.777227 4755 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a75a7d6b-7798-43f7-86b4-539d9df80312-config\") pod \"dnsmasq-dns-77c77c9f89-tggz4\" (UID: \"a75a7d6b-7798-43f7-86b4-539d9df80312\") " pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.777309 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0b0acfb5-ec04-4ca5-aabd-b230dab45121-ovs-rundir\") pod \"ovn-controller-metrics-2wlpg\" (UID: \"0b0acfb5-ec04-4ca5-aabd-b230dab45121\") " pod="openstack/ovn-controller-metrics-2wlpg" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.777355 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b0acfb5-ec04-4ca5-aabd-b230dab45121-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2wlpg\" (UID: \"0b0acfb5-ec04-4ca5-aabd-b230dab45121\") " pod="openstack/ovn-controller-metrics-2wlpg" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.777389 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnbh8\" (UniqueName: \"kubernetes.io/projected/0b0acfb5-ec04-4ca5-aabd-b230dab45121-kube-api-access-gnbh8\") pod \"ovn-controller-metrics-2wlpg\" (UID: \"0b0acfb5-ec04-4ca5-aabd-b230dab45121\") " pod="openstack/ovn-controller-metrics-2wlpg" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.777404 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq2mk\" (UniqueName: \"kubernetes.io/projected/a75a7d6b-7798-43f7-86b4-539d9df80312-kube-api-access-kq2mk\") pod \"dnsmasq-dns-77c77c9f89-tggz4\" (UID: \"a75a7d6b-7798-43f7-86b4-539d9df80312\") " pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.777449 4755 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a75a7d6b-7798-43f7-86b4-539d9df80312-ovsdbserver-sb\") pod \"dnsmasq-dns-77c77c9f89-tggz4\" (UID: \"a75a7d6b-7798-43f7-86b4-539d9df80312\") " pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.778806 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a75a7d6b-7798-43f7-86b4-539d9df80312-dns-svc\") pod \"dnsmasq-dns-77c77c9f89-tggz4\" (UID: \"a75a7d6b-7798-43f7-86b4-539d9df80312\") " pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.778847 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=21.730116226 podStartE2EDuration="32.778830528s" podCreationTimestamp="2026-02-24 10:17:29 +0000 UTC" firstStartedPulling="2026-02-24 10:17:44.636991719 +0000 UTC m=+1369.093514302" lastFinishedPulling="2026-02-24 10:17:55.685706021 +0000 UTC m=+1380.142228604" observedRunningTime="2026-02-24 10:18:01.758419845 +0000 UTC m=+1386.214942408" watchObservedRunningTime="2026-02-24 10:18:01.778830528 +0000 UTC m=+1386.235353071" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.781160 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a75a7d6b-7798-43f7-86b4-539d9df80312-config\") pod \"dnsmasq-dns-77c77c9f89-tggz4\" (UID: \"a75a7d6b-7798-43f7-86b4-539d9df80312\") " pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.786717 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a75a7d6b-7798-43f7-86b4-539d9df80312-ovsdbserver-sb\") pod \"dnsmasq-dns-77c77c9f89-tggz4\" (UID: 
\"a75a7d6b-7798-43f7-86b4-539d9df80312\") " pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.786956 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.808707 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq2mk\" (UniqueName: \"kubernetes.io/projected/a75a7d6b-7798-43f7-86b4-539d9df80312-kube-api-access-kq2mk\") pod \"dnsmasq-dns-77c77c9f89-tggz4\" (UID: \"a75a7d6b-7798-43f7-86b4-539d9df80312\") " pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.811742 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=23.472414029 podStartE2EDuration="33.811723138s" podCreationTimestamp="2026-02-24 10:17:28 +0000 UTC" firstStartedPulling="2026-02-24 10:17:45.800216218 +0000 UTC m=+1370.256738761" lastFinishedPulling="2026-02-24 10:17:56.139525307 +0000 UTC m=+1380.596047870" observedRunningTime="2026-02-24 10:18:01.808041104 +0000 UTC m=+1386.264563637" watchObservedRunningTime="2026-02-24 10:18:01.811723138 +0000 UTC m=+1386.268245681" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.812124 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9fb6c8b7-sdps9" podStartSLOduration=-9223372001.042658 podStartE2EDuration="35.81211799s" podCreationTimestamp="2026-02-24 10:17:26 +0000 UTC" firstStartedPulling="2026-02-24 10:17:26.892451477 +0000 UTC m=+1351.348974020" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 10:18:01.787366123 +0000 UTC m=+1386.243888666" watchObservedRunningTime="2026-02-24 10:18:01.81211799 +0000 UTC m=+1386.268640533" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.877977 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-6dfb8ff55f-z5jxd"] Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.878185 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6dfb8ff55f-z5jxd" podUID="7872be5b-22c5-441e-8b92-fccc79705037" containerName="dnsmasq-dns" containerID="cri-o://09ec6bae85e350eefb342c65d211ec820512e27c24c92f0492e53af6d353c324" gracePeriod=10 Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.878491 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6dfb8ff55f-z5jxd" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.878793 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b0acfb5-ec04-4ca5-aabd-b230dab45121-combined-ca-bundle\") pod \"ovn-controller-metrics-2wlpg\" (UID: \"0b0acfb5-ec04-4ca5-aabd-b230dab45121\") " pod="openstack/ovn-controller-metrics-2wlpg" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.878860 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0b0acfb5-ec04-4ca5-aabd-b230dab45121-ovn-rundir\") pod \"ovn-controller-metrics-2wlpg\" (UID: \"0b0acfb5-ec04-4ca5-aabd-b230dab45121\") " pod="openstack/ovn-controller-metrics-2wlpg" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.878920 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0b0acfb5-ec04-4ca5-aabd-b230dab45121-ovs-rundir\") pod \"ovn-controller-metrics-2wlpg\" (UID: \"0b0acfb5-ec04-4ca5-aabd-b230dab45121\") " pod="openstack/ovn-controller-metrics-2wlpg" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.878961 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0b0acfb5-ec04-4ca5-aabd-b230dab45121-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2wlpg\" (UID: \"0b0acfb5-ec04-4ca5-aabd-b230dab45121\") " pod="openstack/ovn-controller-metrics-2wlpg" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.878983 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnbh8\" (UniqueName: \"kubernetes.io/projected/0b0acfb5-ec04-4ca5-aabd-b230dab45121-kube-api-access-gnbh8\") pod \"ovn-controller-metrics-2wlpg\" (UID: \"0b0acfb5-ec04-4ca5-aabd-b230dab45121\") " pod="openstack/ovn-controller-metrics-2wlpg" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.879091 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b0acfb5-ec04-4ca5-aabd-b230dab45121-config\") pod \"ovn-controller-metrics-2wlpg\" (UID: \"0b0acfb5-ec04-4ca5-aabd-b230dab45121\") " pod="openstack/ovn-controller-metrics-2wlpg" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.879721 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b0acfb5-ec04-4ca5-aabd-b230dab45121-config\") pod \"ovn-controller-metrics-2wlpg\" (UID: \"0b0acfb5-ec04-4ca5-aabd-b230dab45121\") " pod="openstack/ovn-controller-metrics-2wlpg" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.880090 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0b0acfb5-ec04-4ca5-aabd-b230dab45121-ovn-rundir\") pod \"ovn-controller-metrics-2wlpg\" (UID: \"0b0acfb5-ec04-4ca5-aabd-b230dab45121\") " pod="openstack/ovn-controller-metrics-2wlpg" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.880466 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0b0acfb5-ec04-4ca5-aabd-b230dab45121-ovs-rundir\") pod 
\"ovn-controller-metrics-2wlpg\" (UID: \"0b0acfb5-ec04-4ca5-aabd-b230dab45121\") " pod="openstack/ovn-controller-metrics-2wlpg" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.888466 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b0acfb5-ec04-4ca5-aabd-b230dab45121-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2wlpg\" (UID: \"0b0acfb5-ec04-4ca5-aabd-b230dab45121\") " pod="openstack/ovn-controller-metrics-2wlpg" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.897121 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b0acfb5-ec04-4ca5-aabd-b230dab45121-combined-ca-bundle\") pod \"ovn-controller-metrics-2wlpg\" (UID: \"0b0acfb5-ec04-4ca5-aabd-b230dab45121\") " pod="openstack/ovn-controller-metrics-2wlpg" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.899491 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnbh8\" (UniqueName: \"kubernetes.io/projected/0b0acfb5-ec04-4ca5-aabd-b230dab45121-kube-api-access-gnbh8\") pod \"ovn-controller-metrics-2wlpg\" (UID: \"0b0acfb5-ec04-4ca5-aabd-b230dab45121\") " pod="openstack/ovn-controller-metrics-2wlpg" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.928143 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f9bcc5599-rpkkn"] Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.929407 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f9bcc5599-rpkkn" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.931930 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.932473 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.962805 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f9bcc5599-rpkkn"] Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.985357 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsk9f\" (UniqueName: \"kubernetes.io/projected/2364c725-9de7-41e3-8150-a73f93090496-kube-api-access-vsk9f\") pod \"dnsmasq-dns-6f9bcc5599-rpkkn\" (UID: \"2364c725-9de7-41e3-8150-a73f93090496\") " pod="openstack/dnsmasq-dns-6f9bcc5599-rpkkn" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.985432 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2364c725-9de7-41e3-8150-a73f93090496-ovsdbserver-sb\") pod \"dnsmasq-dns-6f9bcc5599-rpkkn\" (UID: \"2364c725-9de7-41e3-8150-a73f93090496\") " pod="openstack/dnsmasq-dns-6f9bcc5599-rpkkn" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.985670 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2364c725-9de7-41e3-8150-a73f93090496-dns-svc\") pod \"dnsmasq-dns-6f9bcc5599-rpkkn\" (UID: \"2364c725-9de7-41e3-8150-a73f93090496\") " pod="openstack/dnsmasq-dns-6f9bcc5599-rpkkn" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.985721 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2364c725-9de7-41e3-8150-a73f93090496-config\") pod \"dnsmasq-dns-6f9bcc5599-rpkkn\" (UID: \"2364c725-9de7-41e3-8150-a73f93090496\") " pod="openstack/dnsmasq-dns-6f9bcc5599-rpkkn" Feb 24 10:18:01 crc kubenswrapper[4755]: I0224 10:18:01.985741 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2364c725-9de7-41e3-8150-a73f93090496-ovsdbserver-nb\") pod \"dnsmasq-dns-6f9bcc5599-rpkkn\" (UID: \"2364c725-9de7-41e3-8150-a73f93090496\") " pod="openstack/dnsmasq-dns-6f9bcc5599-rpkkn" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.010510 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40430: no serving certificate available for the kubelet" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.020348 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-2wlpg" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.087172 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2364c725-9de7-41e3-8150-a73f93090496-dns-svc\") pod \"dnsmasq-dns-6f9bcc5599-rpkkn\" (UID: \"2364c725-9de7-41e3-8150-a73f93090496\") " pod="openstack/dnsmasq-dns-6f9bcc5599-rpkkn" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.087483 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2364c725-9de7-41e3-8150-a73f93090496-config\") pod \"dnsmasq-dns-6f9bcc5599-rpkkn\" (UID: \"2364c725-9de7-41e3-8150-a73f93090496\") " pod="openstack/dnsmasq-dns-6f9bcc5599-rpkkn" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.087510 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2364c725-9de7-41e3-8150-a73f93090496-ovsdbserver-nb\") pod \"dnsmasq-dns-6f9bcc5599-rpkkn\" (UID: \"2364c725-9de7-41e3-8150-a73f93090496\") " pod="openstack/dnsmasq-dns-6f9bcc5599-rpkkn" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.087582 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsk9f\" (UniqueName: 
\"kubernetes.io/projected/2364c725-9de7-41e3-8150-a73f93090496-kube-api-access-vsk9f\") pod \"dnsmasq-dns-6f9bcc5599-rpkkn\" (UID: \"2364c725-9de7-41e3-8150-a73f93090496\") " pod="openstack/dnsmasq-dns-6f9bcc5599-rpkkn" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.087608 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2364c725-9de7-41e3-8150-a73f93090496-ovsdbserver-sb\") pod \"dnsmasq-dns-6f9bcc5599-rpkkn\" (UID: \"2364c725-9de7-41e3-8150-a73f93090496\") " pod="openstack/dnsmasq-dns-6f9bcc5599-rpkkn" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.089578 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2364c725-9de7-41e3-8150-a73f93090496-config\") pod \"dnsmasq-dns-6f9bcc5599-rpkkn\" (UID: \"2364c725-9de7-41e3-8150-a73f93090496\") " pod="openstack/dnsmasq-dns-6f9bcc5599-rpkkn" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.089761 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2364c725-9de7-41e3-8150-a73f93090496-ovsdbserver-sb\") pod \"dnsmasq-dns-6f9bcc5599-rpkkn\" (UID: \"2364c725-9de7-41e3-8150-a73f93090496\") " pod="openstack/dnsmasq-dns-6f9bcc5599-rpkkn" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.094921 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2364c725-9de7-41e3-8150-a73f93090496-dns-svc\") pod \"dnsmasq-dns-6f9bcc5599-rpkkn\" (UID: \"2364c725-9de7-41e3-8150-a73f93090496\") " pod="openstack/dnsmasq-dns-6f9bcc5599-rpkkn" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.095635 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2364c725-9de7-41e3-8150-a73f93090496-ovsdbserver-nb\") pod 
\"dnsmasq-dns-6f9bcc5599-rpkkn\" (UID: \"2364c725-9de7-41e3-8150-a73f93090496\") " pod="openstack/dnsmasq-dns-6f9bcc5599-rpkkn" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.111899 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsk9f\" (UniqueName: \"kubernetes.io/projected/2364c725-9de7-41e3-8150-a73f93090496-kube-api-access-vsk9f\") pod \"dnsmasq-dns-6f9bcc5599-rpkkn\" (UID: \"2364c725-9de7-41e3-8150-a73f93090496\") " pod="openstack/dnsmasq-dns-6f9bcc5599-rpkkn" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.161960 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.164936 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.168382 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.168583 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-c92q4" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.168729 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.168827 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.174902 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.190415 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b7bd67c-3461-45d4-b661-d2df2e35b2f8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"0b7bd67c-3461-45d4-b661-d2df2e35b2f8\") " pod="openstack/ovn-northd-0" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.190461 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b7bd67c-3461-45d4-b661-d2df2e35b2f8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0b7bd67c-3461-45d4-b661-d2df2e35b2f8\") " pod="openstack/ovn-northd-0" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.190514 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b7bd67c-3461-45d4-b661-d2df2e35b2f8-scripts\") pod \"ovn-northd-0\" (UID: \"0b7bd67c-3461-45d4-b661-d2df2e35b2f8\") " pod="openstack/ovn-northd-0" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.190559 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b7bd67c-3461-45d4-b661-d2df2e35b2f8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0b7bd67c-3461-45d4-b661-d2df2e35b2f8\") " pod="openstack/ovn-northd-0" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.190604 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b7bd67c-3461-45d4-b661-d2df2e35b2f8-config\") pod \"ovn-northd-0\" (UID: \"0b7bd67c-3461-45d4-b661-d2df2e35b2f8\") " pod="openstack/ovn-northd-0" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.190628 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0b7bd67c-3461-45d4-b661-d2df2e35b2f8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0b7bd67c-3461-45d4-b661-d2df2e35b2f8\") " pod="openstack/ovn-northd-0" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.190667 
4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cslfh\" (UniqueName: \"kubernetes.io/projected/0b7bd67c-3461-45d4-b661-d2df2e35b2f8-kube-api-access-cslfh\") pod \"ovn-northd-0\" (UID: \"0b7bd67c-3461-45d4-b661-d2df2e35b2f8\") " pod="openstack/ovn-northd-0" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.208637 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9fb6c8b7-sdps9" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.292356 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlz8n\" (UniqueName: \"kubernetes.io/projected/d59fa737-51c8-412d-b6c5-150038e26abb-kube-api-access-nlz8n\") pod \"d59fa737-51c8-412d-b6c5-150038e26abb\" (UID: \"d59fa737-51c8-412d-b6c5-150038e26abb\") " Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.292422 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d59fa737-51c8-412d-b6c5-150038e26abb-config\") pod \"d59fa737-51c8-412d-b6c5-150038e26abb\" (UID: \"d59fa737-51c8-412d-b6c5-150038e26abb\") " Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.292512 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d59fa737-51c8-412d-b6c5-150038e26abb-dns-svc\") pod \"d59fa737-51c8-412d-b6c5-150038e26abb\" (UID: \"d59fa737-51c8-412d-b6c5-150038e26abb\") " Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.292845 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cslfh\" (UniqueName: \"kubernetes.io/projected/0b7bd67c-3461-45d4-b661-d2df2e35b2f8-kube-api-access-cslfh\") pod \"ovn-northd-0\" (UID: \"0b7bd67c-3461-45d4-b661-d2df2e35b2f8\") " pod="openstack/ovn-northd-0" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.292895 
4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b7bd67c-3461-45d4-b661-d2df2e35b2f8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0b7bd67c-3461-45d4-b661-d2df2e35b2f8\") " pod="openstack/ovn-northd-0" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.292911 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b7bd67c-3461-45d4-b661-d2df2e35b2f8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0b7bd67c-3461-45d4-b661-d2df2e35b2f8\") " pod="openstack/ovn-northd-0" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.292976 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b7bd67c-3461-45d4-b661-d2df2e35b2f8-scripts\") pod \"ovn-northd-0\" (UID: \"0b7bd67c-3461-45d4-b661-d2df2e35b2f8\") " pod="openstack/ovn-northd-0" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.293031 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b7bd67c-3461-45d4-b661-d2df2e35b2f8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0b7bd67c-3461-45d4-b661-d2df2e35b2f8\") " pod="openstack/ovn-northd-0" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.293094 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b7bd67c-3461-45d4-b661-d2df2e35b2f8-config\") pod \"ovn-northd-0\" (UID: \"0b7bd67c-3461-45d4-b661-d2df2e35b2f8\") " pod="openstack/ovn-northd-0" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.293119 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0b7bd67c-3461-45d4-b661-d2df2e35b2f8-ovn-rundir\") pod \"ovn-northd-0\" (UID: 
\"0b7bd67c-3461-45d4-b661-d2df2e35b2f8\") " pod="openstack/ovn-northd-0" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.293660 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0b7bd67c-3461-45d4-b661-d2df2e35b2f8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0b7bd67c-3461-45d4-b661-d2df2e35b2f8\") " pod="openstack/ovn-northd-0" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.294677 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b7bd67c-3461-45d4-b661-d2df2e35b2f8-scripts\") pod \"ovn-northd-0\" (UID: \"0b7bd67c-3461-45d4-b661-d2df2e35b2f8\") " pod="openstack/ovn-northd-0" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.296168 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d59fa737-51c8-412d-b6c5-150038e26abb-kube-api-access-nlz8n" (OuterVolumeSpecName: "kube-api-access-nlz8n") pod "d59fa737-51c8-412d-b6c5-150038e26abb" (UID: "d59fa737-51c8-412d-b6c5-150038e26abb"). InnerVolumeSpecName "kube-api-access-nlz8n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.296234 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b7bd67c-3461-45d4-b661-d2df2e35b2f8-config\") pod \"ovn-northd-0\" (UID: \"0b7bd67c-3461-45d4-b661-d2df2e35b2f8\") " pod="openstack/ovn-northd-0" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.299293 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b7bd67c-3461-45d4-b661-d2df2e35b2f8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0b7bd67c-3461-45d4-b661-d2df2e35b2f8\") " pod="openstack/ovn-northd-0" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.301866 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b7bd67c-3461-45d4-b661-d2df2e35b2f8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0b7bd67c-3461-45d4-b661-d2df2e35b2f8\") " pod="openstack/ovn-northd-0" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.303013 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b7bd67c-3461-45d4-b661-d2df2e35b2f8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0b7bd67c-3461-45d4-b661-d2df2e35b2f8\") " pod="openstack/ovn-northd-0" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.312218 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cslfh\" (UniqueName: \"kubernetes.io/projected/0b7bd67c-3461-45d4-b661-d2df2e35b2f8-kube-api-access-cslfh\") pod \"ovn-northd-0\" (UID: \"0b7bd67c-3461-45d4-b661-d2df2e35b2f8\") " pod="openstack/ovn-northd-0" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.314209 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f9bcc5599-rpkkn" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.370106 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d59fa737-51c8-412d-b6c5-150038e26abb-config" (OuterVolumeSpecName: "config") pod "d59fa737-51c8-412d-b6c5-150038e26abb" (UID: "d59fa737-51c8-412d-b6c5-150038e26abb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.379475 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d59fa737-51c8-412d-b6c5-150038e26abb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d59fa737-51c8-412d-b6c5-150038e26abb" (UID: "d59fa737-51c8-412d-b6c5-150038e26abb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.394100 4755 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d59fa737-51c8-412d-b6c5-150038e26abb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.394131 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlz8n\" (UniqueName: \"kubernetes.io/projected/d59fa737-51c8-412d-b6c5-150038e26abb-kube-api-access-nlz8n\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.394141 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d59fa737-51c8-412d-b6c5-150038e26abb-config\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.446950 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dfb8ff55f-z5jxd" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.495544 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7872be5b-22c5-441e-8b92-fccc79705037-dns-svc\") pod \"7872be5b-22c5-441e-8b92-fccc79705037\" (UID: \"7872be5b-22c5-441e-8b92-fccc79705037\") " Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.495618 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7872be5b-22c5-441e-8b92-fccc79705037-config\") pod \"7872be5b-22c5-441e-8b92-fccc79705037\" (UID: \"7872be5b-22c5-441e-8b92-fccc79705037\") " Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.495707 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr6dk\" (UniqueName: \"kubernetes.io/projected/7872be5b-22c5-441e-8b92-fccc79705037-kube-api-access-wr6dk\") pod \"7872be5b-22c5-441e-8b92-fccc79705037\" (UID: \"7872be5b-22c5-441e-8b92-fccc79705037\") " Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.496856 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.522319 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7872be5b-22c5-441e-8b92-fccc79705037-kube-api-access-wr6dk" (OuterVolumeSpecName: "kube-api-access-wr6dk") pod "7872be5b-22c5-441e-8b92-fccc79705037" (UID: "7872be5b-22c5-441e-8b92-fccc79705037"). InnerVolumeSpecName "kube-api-access-wr6dk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.548657 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7872be5b-22c5-441e-8b92-fccc79705037-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7872be5b-22c5-441e-8b92-fccc79705037" (UID: "7872be5b-22c5-441e-8b92-fccc79705037"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.558788 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7872be5b-22c5-441e-8b92-fccc79705037-config" (OuterVolumeSpecName: "config") pod "7872be5b-22c5-441e-8b92-fccc79705037" (UID: "7872be5b-22c5-441e-8b92-fccc79705037"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.570266 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77c77c9f89-tggz4"] Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.593255 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f9bcc5599-rpkkn"] Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.606162 4755 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7872be5b-22c5-441e-8b92-fccc79705037-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.606275 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7872be5b-22c5-441e-8b92-fccc79705037-config\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.606350 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr6dk\" (UniqueName: \"kubernetes.io/projected/7872be5b-22c5-441e-8b92-fccc79705037-kube-api-access-wr6dk\") on node \"crc\" 
DevicePath \"\"" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.648786 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2wlpg"] Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.744412 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f9bcc5599-rpkkn" event={"ID":"2364c725-9de7-41e3-8150-a73f93090496","Type":"ContainerStarted","Data":"3b32120a36d7567f650e66e7cb9969e979fc7edb731d40324f74624d9505cae8"} Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.745667 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" event={"ID":"a75a7d6b-7798-43f7-86b4-539d9df80312","Type":"ContainerStarted","Data":"c95e58962fdc71c85c092c97746f447dc45e988e449dc698beee50d3fa7312f9"} Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.747866 4755 generic.go:334] "Generic (PLEG): container finished" podID="7872be5b-22c5-441e-8b92-fccc79705037" containerID="09ec6bae85e350eefb342c65d211ec820512e27c24c92f0492e53af6d353c324" exitCode=0 Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.748094 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dfb8ff55f-z5jxd" event={"ID":"7872be5b-22c5-441e-8b92-fccc79705037","Type":"ContainerDied","Data":"09ec6bae85e350eefb342c65d211ec820512e27c24c92f0492e53af6d353c324"} Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.748189 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dfb8ff55f-z5jxd" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.748208 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dfb8ff55f-z5jxd" event={"ID":"7872be5b-22c5-441e-8b92-fccc79705037","Type":"ContainerDied","Data":"741b5139bc2d2b654c8bce4692804117c4b5b1a7ddf8e07f1bb52aaf90bc766f"} Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.748233 4755 scope.go:117] "RemoveContainer" containerID="09ec6bae85e350eefb342c65d211ec820512e27c24c92f0492e53af6d353c324" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.752337 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2wlpg" event={"ID":"0b0acfb5-ec04-4ca5-aabd-b230dab45121","Type":"ContainerStarted","Data":"f18442296f97fdcb3b9b204a596be8d06ad8bf3f2810f63f73c81ec612f99425"} Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.769436 4755 generic.go:334] "Generic (PLEG): container finished" podID="d59fa737-51c8-412d-b6c5-150038e26abb" containerID="450df0e217c83e31403c618a688d31a0cc8ab35e01beaa2ac7d7b8a9e7dad3d9" exitCode=0 Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.769516 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fb6c8b7-sdps9" event={"ID":"d59fa737-51c8-412d-b6c5-150038e26abb","Type":"ContainerDied","Data":"450df0e217c83e31403c618a688d31a0cc8ab35e01beaa2ac7d7b8a9e7dad3d9"} Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.769574 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fb6c8b7-sdps9" event={"ID":"d59fa737-51c8-412d-b6c5-150038e26abb","Type":"ContainerDied","Data":"e845c1e4a523f050d62ab4e09b523ae43c57c208fb681e01ea0ae1f7e0d0432b"} Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.770307 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9fb6c8b7-sdps9" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.779217 4755 scope.go:117] "RemoveContainer" containerID="5618aeeb59391d0b5a5c9c8db7e674518fe970371f1551421d56f9752cc1bc08" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.790624 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dfb8ff55f-z5jxd"] Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.807632 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6dfb8ff55f-z5jxd"] Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.818162 4755 scope.go:117] "RemoveContainer" containerID="09ec6bae85e350eefb342c65d211ec820512e27c24c92f0492e53af6d353c324" Feb 24 10:18:02 crc kubenswrapper[4755]: E0224 10:18:02.818552 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09ec6bae85e350eefb342c65d211ec820512e27c24c92f0492e53af6d353c324\": container with ID starting with 09ec6bae85e350eefb342c65d211ec820512e27c24c92f0492e53af6d353c324 not found: ID does not exist" containerID="09ec6bae85e350eefb342c65d211ec820512e27c24c92f0492e53af6d353c324" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.818591 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09ec6bae85e350eefb342c65d211ec820512e27c24c92f0492e53af6d353c324"} err="failed to get container status \"09ec6bae85e350eefb342c65d211ec820512e27c24c92f0492e53af6d353c324\": rpc error: code = NotFound desc = could not find container \"09ec6bae85e350eefb342c65d211ec820512e27c24c92f0492e53af6d353c324\": container with ID starting with 09ec6bae85e350eefb342c65d211ec820512e27c24c92f0492e53af6d353c324 not found: ID does not exist" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.818620 4755 scope.go:117] "RemoveContainer" containerID="5618aeeb59391d0b5a5c9c8db7e674518fe970371f1551421d56f9752cc1bc08" Feb 24 
10:18:02 crc kubenswrapper[4755]: E0224 10:18:02.818982 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5618aeeb59391d0b5a5c9c8db7e674518fe970371f1551421d56f9752cc1bc08\": container with ID starting with 5618aeeb59391d0b5a5c9c8db7e674518fe970371f1551421d56f9752cc1bc08 not found: ID does not exist" containerID="5618aeeb59391d0b5a5c9c8db7e674518fe970371f1551421d56f9752cc1bc08" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.819004 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5618aeeb59391d0b5a5c9c8db7e674518fe970371f1551421d56f9752cc1bc08"} err="failed to get container status \"5618aeeb59391d0b5a5c9c8db7e674518fe970371f1551421d56f9752cc1bc08\": rpc error: code = NotFound desc = could not find container \"5618aeeb59391d0b5a5c9c8db7e674518fe970371f1551421d56f9752cc1bc08\": container with ID starting with 5618aeeb59391d0b5a5c9c8db7e674518fe970371f1551421d56f9752cc1bc08 not found: ID does not exist" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.819022 4755 scope.go:117] "RemoveContainer" containerID="450df0e217c83e31403c618a688d31a0cc8ab35e01beaa2ac7d7b8a9e7dad3d9" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.819144 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9fb6c8b7-sdps9"] Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.827410 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9fb6c8b7-sdps9"] Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.931173 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.948513 4755 scope.go:117] "RemoveContainer" containerID="05c92824ea6c4412b6f9a832fb8c9d7dbf0396cc6c186986eb7fe3f6bf95017b" Feb 24 10:18:02 crc kubenswrapper[4755]: W0224 10:18:02.963343 4755 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b7bd67c_3461_45d4_b661_d2df2e35b2f8.slice/crio-1b65360289dab053fa37e36c89da7a14d485673eaa5e7ba36e369a65ef8c8523 WatchSource:0}: Error finding container 1b65360289dab053fa37e36c89da7a14d485673eaa5e7ba36e369a65ef8c8523: Status 404 returned error can't find the container with id 1b65360289dab053fa37e36c89da7a14d485673eaa5e7ba36e369a65ef8c8523 Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.977640 4755 scope.go:117] "RemoveContainer" containerID="450df0e217c83e31403c618a688d31a0cc8ab35e01beaa2ac7d7b8a9e7dad3d9" Feb 24 10:18:02 crc kubenswrapper[4755]: E0224 10:18:02.977945 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"450df0e217c83e31403c618a688d31a0cc8ab35e01beaa2ac7d7b8a9e7dad3d9\": container with ID starting with 450df0e217c83e31403c618a688d31a0cc8ab35e01beaa2ac7d7b8a9e7dad3d9 not found: ID does not exist" containerID="450df0e217c83e31403c618a688d31a0cc8ab35e01beaa2ac7d7b8a9e7dad3d9" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.977970 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"450df0e217c83e31403c618a688d31a0cc8ab35e01beaa2ac7d7b8a9e7dad3d9"} err="failed to get container status \"450df0e217c83e31403c618a688d31a0cc8ab35e01beaa2ac7d7b8a9e7dad3d9\": rpc error: code = NotFound desc = could not find container \"450df0e217c83e31403c618a688d31a0cc8ab35e01beaa2ac7d7b8a9e7dad3d9\": container with ID starting with 450df0e217c83e31403c618a688d31a0cc8ab35e01beaa2ac7d7b8a9e7dad3d9 not found: ID does not exist" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.977988 4755 scope.go:117] "RemoveContainer" containerID="05c92824ea6c4412b6f9a832fb8c9d7dbf0396cc6c186986eb7fe3f6bf95017b" Feb 24 10:18:02 crc kubenswrapper[4755]: E0224 10:18:02.978439 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"05c92824ea6c4412b6f9a832fb8c9d7dbf0396cc6c186986eb7fe3f6bf95017b\": container with ID starting with 05c92824ea6c4412b6f9a832fb8c9d7dbf0396cc6c186986eb7fe3f6bf95017b not found: ID does not exist" containerID="05c92824ea6c4412b6f9a832fb8c9d7dbf0396cc6c186986eb7fe3f6bf95017b" Feb 24 10:18:02 crc kubenswrapper[4755]: I0224 10:18:02.978462 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05c92824ea6c4412b6f9a832fb8c9d7dbf0396cc6c186986eb7fe3f6bf95017b"} err="failed to get container status \"05c92824ea6c4412b6f9a832fb8c9d7dbf0396cc6c186986eb7fe3f6bf95017b\": rpc error: code = NotFound desc = could not find container \"05c92824ea6c4412b6f9a832fb8c9d7dbf0396cc6c186986eb7fe3f6bf95017b\": container with ID starting with 05c92824ea6c4412b6f9a832fb8c9d7dbf0396cc6c186986eb7fe3f6bf95017b not found: ID does not exist" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.256923 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40446: no serving certificate available for the kubelet" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.617460 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f9bcc5599-rpkkn"] Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.677462 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77cf9b784c-lmqn8"] Feb 24 10:18:03 crc kubenswrapper[4755]: E0224 10:18:03.677750 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d59fa737-51c8-412d-b6c5-150038e26abb" containerName="dnsmasq-dns" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.677772 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d59fa737-51c8-412d-b6c5-150038e26abb" containerName="dnsmasq-dns" Feb 24 10:18:03 crc kubenswrapper[4755]: E0224 10:18:03.677791 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7872be5b-22c5-441e-8b92-fccc79705037" containerName="dnsmasq-dns" Feb 
24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.677797 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="7872be5b-22c5-441e-8b92-fccc79705037" containerName="dnsmasq-dns" Feb 24 10:18:03 crc kubenswrapper[4755]: E0224 10:18:03.677809 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7872be5b-22c5-441e-8b92-fccc79705037" containerName="init" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.677815 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="7872be5b-22c5-441e-8b92-fccc79705037" containerName="init" Feb 24 10:18:03 crc kubenswrapper[4755]: E0224 10:18:03.677836 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d59fa737-51c8-412d-b6c5-150038e26abb" containerName="init" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.677842 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d59fa737-51c8-412d-b6c5-150038e26abb" containerName="init" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.677972 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="7872be5b-22c5-441e-8b92-fccc79705037" containerName="dnsmasq-dns" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.677988 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="d59fa737-51c8-412d-b6c5-150038e26abb" containerName="dnsmasq-dns" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.678797 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.706714 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77cf9b784c-lmqn8"] Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.741480 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/667faef0-2cfe-417b-9872-a36c104256aa-dns-svc\") pod \"dnsmasq-dns-77cf9b784c-lmqn8\" (UID: \"667faef0-2cfe-417b-9872-a36c104256aa\") " pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.741731 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/667faef0-2cfe-417b-9872-a36c104256aa-ovsdbserver-nb\") pod \"dnsmasq-dns-77cf9b784c-lmqn8\" (UID: \"667faef0-2cfe-417b-9872-a36c104256aa\") " pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.741857 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nfbh\" (UniqueName: \"kubernetes.io/projected/667faef0-2cfe-417b-9872-a36c104256aa-kube-api-access-8nfbh\") pod \"dnsmasq-dns-77cf9b784c-lmqn8\" (UID: \"667faef0-2cfe-417b-9872-a36c104256aa\") " pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.741969 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/667faef0-2cfe-417b-9872-a36c104256aa-config\") pod \"dnsmasq-dns-77cf9b784c-lmqn8\" (UID: \"667faef0-2cfe-417b-9872-a36c104256aa\") " pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.742155 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/667faef0-2cfe-417b-9872-a36c104256aa-ovsdbserver-sb\") pod \"dnsmasq-dns-77cf9b784c-lmqn8\" (UID: \"667faef0-2cfe-417b-9872-a36c104256aa\") " pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.788555 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.790476 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0b7bd67c-3461-45d4-b661-d2df2e35b2f8","Type":"ContainerStarted","Data":"1b65360289dab053fa37e36c89da7a14d485673eaa5e7ba36e369a65ef8c8523"} Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.792851 4755 generic.go:334] "Generic (PLEG): container finished" podID="2364c725-9de7-41e3-8150-a73f93090496" containerID="8a28f8598b9bada23134fb4cdc2a659ef07db90ddcb1eec357a80fd7a5dbbd2a" exitCode=0 Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.792904 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f9bcc5599-rpkkn" event={"ID":"2364c725-9de7-41e3-8150-a73f93090496","Type":"ContainerDied","Data":"8a28f8598b9bada23134fb4cdc2a659ef07db90ddcb1eec357a80fd7a5dbbd2a"} Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.794159 4755 generic.go:334] "Generic (PLEG): container finished" podID="a75a7d6b-7798-43f7-86b4-539d9df80312" containerID="021bd1d528cfd0d025d61b5e72c1d1aba1e5d74198ae355b746b2b09b0751631" exitCode=0 Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.794365 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" event={"ID":"a75a7d6b-7798-43f7-86b4-539d9df80312","Type":"ContainerDied","Data":"021bd1d528cfd0d025d61b5e72c1d1aba1e5d74198ae355b746b2b09b0751631"} Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.799167 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-metrics-2wlpg" event={"ID":"0b0acfb5-ec04-4ca5-aabd-b230dab45121","Type":"ContainerStarted","Data":"a35104ec0eb4e2f913ddf8c339904995699618b3ab9ac8973a4d75331d1362a9"} Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.845308 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nfbh\" (UniqueName: \"kubernetes.io/projected/667faef0-2cfe-417b-9872-a36c104256aa-kube-api-access-8nfbh\") pod \"dnsmasq-dns-77cf9b784c-lmqn8\" (UID: \"667faef0-2cfe-417b-9872-a36c104256aa\") " pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.845378 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/667faef0-2cfe-417b-9872-a36c104256aa-config\") pod \"dnsmasq-dns-77cf9b784c-lmqn8\" (UID: \"667faef0-2cfe-417b-9872-a36c104256aa\") " pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.845628 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/667faef0-2cfe-417b-9872-a36c104256aa-ovsdbserver-sb\") pod \"dnsmasq-dns-77cf9b784c-lmqn8\" (UID: \"667faef0-2cfe-417b-9872-a36c104256aa\") " pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.845664 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/667faef0-2cfe-417b-9872-a36c104256aa-dns-svc\") pod \"dnsmasq-dns-77cf9b784c-lmqn8\" (UID: \"667faef0-2cfe-417b-9872-a36c104256aa\") " pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.845706 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/667faef0-2cfe-417b-9872-a36c104256aa-ovsdbserver-nb\") pod 
\"dnsmasq-dns-77cf9b784c-lmqn8\" (UID: \"667faef0-2cfe-417b-9872-a36c104256aa\") " pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.849152 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/667faef0-2cfe-417b-9872-a36c104256aa-dns-svc\") pod \"dnsmasq-dns-77cf9b784c-lmqn8\" (UID: \"667faef0-2cfe-417b-9872-a36c104256aa\") " pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.849283 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/667faef0-2cfe-417b-9872-a36c104256aa-ovsdbserver-sb\") pod \"dnsmasq-dns-77cf9b784c-lmqn8\" (UID: \"667faef0-2cfe-417b-9872-a36c104256aa\") " pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.849422 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/667faef0-2cfe-417b-9872-a36c104256aa-config\") pod \"dnsmasq-dns-77cf9b784c-lmqn8\" (UID: \"667faef0-2cfe-417b-9872-a36c104256aa\") " pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.850121 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/667faef0-2cfe-417b-9872-a36c104256aa-ovsdbserver-nb\") pod \"dnsmasq-dns-77cf9b784c-lmqn8\" (UID: \"667faef0-2cfe-417b-9872-a36c104256aa\") " pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.887141 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nfbh\" (UniqueName: \"kubernetes.io/projected/667faef0-2cfe-417b-9872-a36c104256aa-kube-api-access-8nfbh\") pod \"dnsmasq-dns-77cf9b784c-lmqn8\" (UID: \"667faef0-2cfe-417b-9872-a36c104256aa\") " 
pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.934060 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-2wlpg" podStartSLOduration=2.934037415 podStartE2EDuration="2.934037415s" podCreationTimestamp="2026-02-24 10:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 10:18:03.857688617 +0000 UTC m=+1388.314211160" watchObservedRunningTime="2026-02-24 10:18:03.934037415 +0000 UTC m=+1388.390559958" Feb 24 10:18:03 crc kubenswrapper[4755]: I0224 10:18:03.992957 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" Feb 24 10:18:04 crc kubenswrapper[4755]: E0224 10:18:04.278581 4755 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 24 10:18:04 crc kubenswrapper[4755]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/2364c725-9de7-41e3-8150-a73f93090496/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 24 10:18:04 crc kubenswrapper[4755]: > podSandboxID="3b32120a36d7567f650e66e7cb9969e979fc7edb731d40324f74624d9505cae8" Feb 24 10:18:04 crc kubenswrapper[4755]: E0224 10:18:04.278977 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 10:18:04 crc kubenswrapper[4755]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n654h99h64ch5dbh6dh555h587h64bh5cfh647h5fdh57ch679h9h597h5f5hbch59bh54fh575h566h667h586h5f5h65ch5bch57h68h65ch58bh694h5cfq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vsk9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6f9bcc5599-rpkkn_openstack(2364c725-9de7-41e3-8150-a73f93090496): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/2364c725-9de7-41e3-8150-a73f93090496/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 24 10:18:04 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 10:18:04 crc kubenswrapper[4755]: E0224 10:18:04.280357 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/2364c725-9de7-41e3-8150-a73f93090496/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-6f9bcc5599-rpkkn" podUID="2364c725-9de7-41e3-8150-a73f93090496" Feb 24 10:18:04 crc kubenswrapper[4755]: E0224 10:18:04.297701 4755 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 24 10:18:04 crc kubenswrapper[4755]: rpc error: code = Unknown desc = container create failed: mount 
`/var/lib/kubelet/pods/a75a7d6b-7798-43f7-86b4-539d9df80312/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 24 10:18:04 crc kubenswrapper[4755]: > podSandboxID="c95e58962fdc71c85c092c97746f447dc45e988e449dc698beee50d3fa7312f9" Feb 24 10:18:04 crc kubenswrapper[4755]: E0224 10:18:04.297834 4755 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 10:18:04 crc kubenswrapper[4755]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8ch647h5fdh676h5c8h566h96h5d8hdh569h64dh5b5h587h55h5cch58dh658h67h5f6h64fh648h6h59fh65ch7hf9hf6h74hf8hch596h5b8q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kq2mk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccou
nt,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-77c77c9f89-tggz4_openstack(a75a7d6b-7798-43f7-86b4-539d9df80312): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/a75a7d6b-7798-43f7-86b4-539d9df80312/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 24 10:18:04 crc kubenswrapper[4755]: > logger="UnhandledError" Feb 24 10:18:04 crc kubenswrapper[4755]: E0224 10:18:04.299136 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/a75a7d6b-7798-43f7-86b4-539d9df80312/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or 
directory\\n\"" pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" podUID="a75a7d6b-7798-43f7-86b4-539d9df80312" Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.324850 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7872be5b-22c5-441e-8b92-fccc79705037" path="/var/lib/kubelet/pods/7872be5b-22c5-441e-8b92-fccc79705037/volumes" Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.325673 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d59fa737-51c8-412d-b6c5-150038e26abb" path="/var/lib/kubelet/pods/d59fa737-51c8-412d-b6c5-150038e26abb/volumes" Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.772318 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77cf9b784c-lmqn8"] Feb 24 10:18:04 crc kubenswrapper[4755]: W0224 10:18:04.777330 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod667faef0_2cfe_417b_9872_a36c104256aa.slice/crio-d0b8262c5dc8a8fc34593393d66d46f03ecd824c0fefb5f38ab9ffe05dbafbcd WatchSource:0}: Error finding container d0b8262c5dc8a8fc34593393d66d46f03ecd824c0fefb5f38ab9ffe05dbafbcd: Status 404 returned error can't find the container with id d0b8262c5dc8a8fc34593393d66d46f03ecd824c0fefb5f38ab9ffe05dbafbcd Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.797664 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.804308 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.806484 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.806691 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.806838 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.807522 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-64q59" Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.822611 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" event={"ID":"667faef0-2cfe-417b-9872-a36c104256aa","Type":"ContainerStarted","Data":"d0b8262c5dc8a8fc34593393d66d46f03ecd824c0fefb5f38ab9ffe05dbafbcd"} Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.823444 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.839791 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0b7bd67c-3461-45d4-b661-d2df2e35b2f8","Type":"ContainerStarted","Data":"07ac8414a1fdd41d2e90de12fda1cf2bff6405c5cb844214a3506dc62eb63b6b"} Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.839948 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0b7bd67c-3461-45d4-b661-d2df2e35b2f8","Type":"ContainerStarted","Data":"e64ea57e78b943bf91b0377bbfe119e657a59bb293100c3439ec611b9aa81cfe"} Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.860241 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jx2z\" (UniqueName: 
\"kubernetes.io/projected/f98e0f6d-c3fa-4931-a452-dbe169ae113d-kube-api-access-6jx2z\") pod \"swift-storage-0\" (UID: \"f98e0f6d-c3fa-4931-a452-dbe169ae113d\") " pod="openstack/swift-storage-0" Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.860318 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f98e0f6d-c3fa-4931-a452-dbe169ae113d-etc-swift\") pod \"swift-storage-0\" (UID: \"f98e0f6d-c3fa-4931-a452-dbe169ae113d\") " pod="openstack/swift-storage-0" Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.860350 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98e0f6d-c3fa-4931-a452-dbe169ae113d-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f98e0f6d-c3fa-4931-a452-dbe169ae113d\") " pod="openstack/swift-storage-0" Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.860382 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"f98e0f6d-c3fa-4931-a452-dbe169ae113d\") " pod="openstack/swift-storage-0" Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.860431 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f98e0f6d-c3fa-4931-a452-dbe169ae113d-cache\") pod \"swift-storage-0\" (UID: \"f98e0f6d-c3fa-4931-a452-dbe169ae113d\") " pod="openstack/swift-storage-0" Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.860720 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f98e0f6d-c3fa-4931-a452-dbe169ae113d-lock\") pod \"swift-storage-0\" (UID: 
\"f98e0f6d-c3fa-4931-a452-dbe169ae113d\") " pod="openstack/swift-storage-0" Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.906313 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.543272366 podStartE2EDuration="2.906288371s" podCreationTimestamp="2026-02-24 10:18:02 +0000 UTC" firstStartedPulling="2026-02-24 10:18:02.965709651 +0000 UTC m=+1387.422232194" lastFinishedPulling="2026-02-24 10:18:04.328725656 +0000 UTC m=+1388.785248199" observedRunningTime="2026-02-24 10:18:04.899311285 +0000 UTC m=+1389.355895710" watchObservedRunningTime="2026-02-24 10:18:04.906288371 +0000 UTC m=+1389.362810914" Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.961746 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f98e0f6d-c3fa-4931-a452-dbe169ae113d-lock\") pod \"swift-storage-0\" (UID: \"f98e0f6d-c3fa-4931-a452-dbe169ae113d\") " pod="openstack/swift-storage-0" Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.961838 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jx2z\" (UniqueName: \"kubernetes.io/projected/f98e0f6d-c3fa-4931-a452-dbe169ae113d-kube-api-access-6jx2z\") pod \"swift-storage-0\" (UID: \"f98e0f6d-c3fa-4931-a452-dbe169ae113d\") " pod="openstack/swift-storage-0" Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.961873 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f98e0f6d-c3fa-4931-a452-dbe169ae113d-etc-swift\") pod \"swift-storage-0\" (UID: \"f98e0f6d-c3fa-4931-a452-dbe169ae113d\") " pod="openstack/swift-storage-0" Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.961892 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f98e0f6d-c3fa-4931-a452-dbe169ae113d-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f98e0f6d-c3fa-4931-a452-dbe169ae113d\") " pod="openstack/swift-storage-0" Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.961908 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"f98e0f6d-c3fa-4931-a452-dbe169ae113d\") " pod="openstack/swift-storage-0" Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.961940 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f98e0f6d-c3fa-4931-a452-dbe169ae113d-cache\") pod \"swift-storage-0\" (UID: \"f98e0f6d-c3fa-4931-a452-dbe169ae113d\") " pod="openstack/swift-storage-0" Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.962837 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f98e0f6d-c3fa-4931-a452-dbe169ae113d-lock\") pod \"swift-storage-0\" (UID: \"f98e0f6d-c3fa-4931-a452-dbe169ae113d\") " pod="openstack/swift-storage-0" Feb 24 10:18:04 crc kubenswrapper[4755]: E0224 10:18:04.962920 4755 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 24 10:18:04 crc kubenswrapper[4755]: E0224 10:18:04.962932 4755 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 24 10:18:04 crc kubenswrapper[4755]: E0224 10:18:04.962977 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f98e0f6d-c3fa-4931-a452-dbe169ae113d-etc-swift podName:f98e0f6d-c3fa-4931-a452-dbe169ae113d nodeName:}" failed. No retries permitted until 2026-02-24 10:18:05.462962299 +0000 UTC m=+1389.919484842 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f98e0f6d-c3fa-4931-a452-dbe169ae113d-etc-swift") pod "swift-storage-0" (UID: "f98e0f6d-c3fa-4931-a452-dbe169ae113d") : configmap "swift-ring-files" not found Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.963769 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f98e0f6d-c3fa-4931-a452-dbe169ae113d-cache\") pod \"swift-storage-0\" (UID: \"f98e0f6d-c3fa-4931-a452-dbe169ae113d\") " pod="openstack/swift-storage-0" Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.963959 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"f98e0f6d-c3fa-4931-a452-dbe169ae113d\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.969027 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98e0f6d-c3fa-4931-a452-dbe169ae113d-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f98e0f6d-c3fa-4931-a452-dbe169ae113d\") " pod="openstack/swift-storage-0" Feb 24 10:18:04 crc kubenswrapper[4755]: I0224 10:18:04.980336 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jx2z\" (UniqueName: \"kubernetes.io/projected/f98e0f6d-c3fa-4931-a452-dbe169ae113d-kube-api-access-6jx2z\") pod \"swift-storage-0\" (UID: \"f98e0f6d-c3fa-4931-a452-dbe169ae113d\") " pod="openstack/swift-storage-0" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.041969 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"f98e0f6d-c3fa-4931-a452-dbe169ae113d\") " 
pod="openstack/swift-storage-0" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.082164 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50400: no serving certificate available for the kubelet" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.259347 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f9bcc5599-rpkkn" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.339096 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-hv6bz"] Feb 24 10:18:05 crc kubenswrapper[4755]: E0224 10:18:05.339797 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2364c725-9de7-41e3-8150-a73f93090496" containerName="init" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.339816 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="2364c725-9de7-41e3-8150-a73f93090496" containerName="init" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.340134 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="2364c725-9de7-41e3-8150-a73f93090496" containerName="init" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.340863 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-hv6bz" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.346432 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.346775 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.346996 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.353875 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-hv6bz"] Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.376296 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsk9f\" (UniqueName: \"kubernetes.io/projected/2364c725-9de7-41e3-8150-a73f93090496-kube-api-access-vsk9f\") pod \"2364c725-9de7-41e3-8150-a73f93090496\" (UID: \"2364c725-9de7-41e3-8150-a73f93090496\") " Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.376468 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2364c725-9de7-41e3-8150-a73f93090496-ovsdbserver-sb\") pod \"2364c725-9de7-41e3-8150-a73f93090496\" (UID: \"2364c725-9de7-41e3-8150-a73f93090496\") " Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.376537 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2364c725-9de7-41e3-8150-a73f93090496-config\") pod \"2364c725-9de7-41e3-8150-a73f93090496\" (UID: \"2364c725-9de7-41e3-8150-a73f93090496\") " Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.376572 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2364c725-9de7-41e3-8150-a73f93090496-ovsdbserver-nb\") pod \"2364c725-9de7-41e3-8150-a73f93090496\" (UID: \"2364c725-9de7-41e3-8150-a73f93090496\") " Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.376660 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2364c725-9de7-41e3-8150-a73f93090496-dns-svc\") pod \"2364c725-9de7-41e3-8150-a73f93090496\" (UID: \"2364c725-9de7-41e3-8150-a73f93090496\") " Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.376808 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/184ff6eb-4f43-4080-b895-08b61e26f3b1-scripts\") pod \"swift-ring-rebalance-hv6bz\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " pod="openstack/swift-ring-rebalance-hv6bz" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.376859 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/184ff6eb-4f43-4080-b895-08b61e26f3b1-swiftconf\") pod \"swift-ring-rebalance-hv6bz\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " pod="openstack/swift-ring-rebalance-hv6bz" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.376909 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/184ff6eb-4f43-4080-b895-08b61e26f3b1-dispersionconf\") pod \"swift-ring-rebalance-hv6bz\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " pod="openstack/swift-ring-rebalance-hv6bz" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.377020 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/184ff6eb-4f43-4080-b895-08b61e26f3b1-etc-swift\") pod \"swift-ring-rebalance-hv6bz\" 
(UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " pod="openstack/swift-ring-rebalance-hv6bz" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.377048 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/184ff6eb-4f43-4080-b895-08b61e26f3b1-combined-ca-bundle\") pod \"swift-ring-rebalance-hv6bz\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " pod="openstack/swift-ring-rebalance-hv6bz" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.377086 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wncg\" (UniqueName: \"kubernetes.io/projected/184ff6eb-4f43-4080-b895-08b61e26f3b1-kube-api-access-4wncg\") pod \"swift-ring-rebalance-hv6bz\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " pod="openstack/swift-ring-rebalance-hv6bz" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.377106 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/184ff6eb-4f43-4080-b895-08b61e26f3b1-ring-data-devices\") pod \"swift-ring-rebalance-hv6bz\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " pod="openstack/swift-ring-rebalance-hv6bz" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.385050 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2364c725-9de7-41e3-8150-a73f93090496-kube-api-access-vsk9f" (OuterVolumeSpecName: "kube-api-access-vsk9f") pod "2364c725-9de7-41e3-8150-a73f93090496" (UID: "2364c725-9de7-41e3-8150-a73f93090496"). InnerVolumeSpecName "kube-api-access-vsk9f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.422563 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2364c725-9de7-41e3-8150-a73f93090496-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2364c725-9de7-41e3-8150-a73f93090496" (UID: "2364c725-9de7-41e3-8150-a73f93090496"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.424484 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2364c725-9de7-41e3-8150-a73f93090496-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2364c725-9de7-41e3-8150-a73f93090496" (UID: "2364c725-9de7-41e3-8150-a73f93090496"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.424860 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2364c725-9de7-41e3-8150-a73f93090496-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2364c725-9de7-41e3-8150-a73f93090496" (UID: "2364c725-9de7-41e3-8150-a73f93090496"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.432707 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2364c725-9de7-41e3-8150-a73f93090496-config" (OuterVolumeSpecName: "config") pod "2364c725-9de7-41e3-8150-a73f93090496" (UID: "2364c725-9de7-41e3-8150-a73f93090496"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.480215 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/184ff6eb-4f43-4080-b895-08b61e26f3b1-dispersionconf\") pod \"swift-ring-rebalance-hv6bz\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " pod="openstack/swift-ring-rebalance-hv6bz" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.484714 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f98e0f6d-c3fa-4931-a452-dbe169ae113d-etc-swift\") pod \"swift-storage-0\" (UID: \"f98e0f6d-c3fa-4931-a452-dbe169ae113d\") " pod="openstack/swift-storage-0" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.484848 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/184ff6eb-4f43-4080-b895-08b61e26f3b1-etc-swift\") pod \"swift-ring-rebalance-hv6bz\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " pod="openstack/swift-ring-rebalance-hv6bz" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.484879 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/184ff6eb-4f43-4080-b895-08b61e26f3b1-combined-ca-bundle\") pod \"swift-ring-rebalance-hv6bz\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " pod="openstack/swift-ring-rebalance-hv6bz" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.484917 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wncg\" (UniqueName: \"kubernetes.io/projected/184ff6eb-4f43-4080-b895-08b61e26f3b1-kube-api-access-4wncg\") pod \"swift-ring-rebalance-hv6bz\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " pod="openstack/swift-ring-rebalance-hv6bz" Feb 24 10:18:05 crc kubenswrapper[4755]: 
I0224 10:18:05.484963 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/184ff6eb-4f43-4080-b895-08b61e26f3b1-ring-data-devices\") pod \"swift-ring-rebalance-hv6bz\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " pod="openstack/swift-ring-rebalance-hv6bz" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.484995 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/184ff6eb-4f43-4080-b895-08b61e26f3b1-scripts\") pod \"swift-ring-rebalance-hv6bz\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " pod="openstack/swift-ring-rebalance-hv6bz" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.485256 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/184ff6eb-4f43-4080-b895-08b61e26f3b1-swiftconf\") pod \"swift-ring-rebalance-hv6bz\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " pod="openstack/swift-ring-rebalance-hv6bz" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.485502 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2364c725-9de7-41e3-8150-a73f93090496-config\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.485515 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2364c725-9de7-41e3-8150-a73f93090496-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.485525 4755 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2364c725-9de7-41e3-8150-a73f93090496-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.485535 4755 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-vsk9f\" (UniqueName: \"kubernetes.io/projected/2364c725-9de7-41e3-8150-a73f93090496-kube-api-access-vsk9f\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.485545 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2364c725-9de7-41e3-8150-a73f93090496-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:05 crc kubenswrapper[4755]: E0224 10:18:05.485653 4755 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 24 10:18:05 crc kubenswrapper[4755]: E0224 10:18:05.485665 4755 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 24 10:18:05 crc kubenswrapper[4755]: E0224 10:18:05.485718 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f98e0f6d-c3fa-4931-a452-dbe169ae113d-etc-swift podName:f98e0f6d-c3fa-4931-a452-dbe169ae113d nodeName:}" failed. No retries permitted until 2026-02-24 10:18:06.485696122 +0000 UTC m=+1390.942218735 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f98e0f6d-c3fa-4931-a452-dbe169ae113d-etc-swift") pod "swift-storage-0" (UID: "f98e0f6d-c3fa-4931-a452-dbe169ae113d") : configmap "swift-ring-files" not found Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.483959 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/184ff6eb-4f43-4080-b895-08b61e26f3b1-dispersionconf\") pod \"swift-ring-rebalance-hv6bz\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " pod="openstack/swift-ring-rebalance-hv6bz" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.486308 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/184ff6eb-4f43-4080-b895-08b61e26f3b1-etc-swift\") pod \"swift-ring-rebalance-hv6bz\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " pod="openstack/swift-ring-rebalance-hv6bz" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.486770 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/184ff6eb-4f43-4080-b895-08b61e26f3b1-ring-data-devices\") pod \"swift-ring-rebalance-hv6bz\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " pod="openstack/swift-ring-rebalance-hv6bz" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.488647 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/184ff6eb-4f43-4080-b895-08b61e26f3b1-scripts\") pod \"swift-ring-rebalance-hv6bz\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " pod="openstack/swift-ring-rebalance-hv6bz" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.489627 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/184ff6eb-4f43-4080-b895-08b61e26f3b1-combined-ca-bundle\") pod 
\"swift-ring-rebalance-hv6bz\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " pod="openstack/swift-ring-rebalance-hv6bz" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.491403 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/184ff6eb-4f43-4080-b895-08b61e26f3b1-swiftconf\") pod \"swift-ring-rebalance-hv6bz\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " pod="openstack/swift-ring-rebalance-hv6bz" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.509360 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wncg\" (UniqueName: \"kubernetes.io/projected/184ff6eb-4f43-4080-b895-08b61e26f3b1-kube-api-access-4wncg\") pod \"swift-ring-rebalance-hv6bz\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " pod="openstack/swift-ring-rebalance-hv6bz" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.780956 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-hv6bz" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.853723 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f9bcc5599-rpkkn" event={"ID":"2364c725-9de7-41e3-8150-a73f93090496","Type":"ContainerDied","Data":"3b32120a36d7567f650e66e7cb9969e979fc7edb731d40324f74624d9505cae8"} Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.853805 4755 scope.go:117] "RemoveContainer" containerID="8a28f8598b9bada23134fb4cdc2a659ef07db90ddcb1eec357a80fd7a5dbbd2a" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.853996 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f9bcc5599-rpkkn" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.861971 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" event={"ID":"a75a7d6b-7798-43f7-86b4-539d9df80312","Type":"ContainerStarted","Data":"4fb369252712ae431e33c738d3a5495dd3386431677c2e87fe761f8b46b5f2d2"} Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.862393 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.867355 4755 generic.go:334] "Generic (PLEG): container finished" podID="667faef0-2cfe-417b-9872-a36c104256aa" containerID="e437b39b5fa6d8ce8c5be0ef0c8d2711a2bb8c936a2d13435eaf173227ccb0b9" exitCode=0 Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.868461 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" event={"ID":"667faef0-2cfe-417b-9872-a36c104256aa","Type":"ContainerDied","Data":"e437b39b5fa6d8ce8c5be0ef0c8d2711a2bb8c936a2d13435eaf173227ccb0b9"} Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.868841 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.898095 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" podStartSLOduration=4.898056072 podStartE2EDuration="4.898056072s" podCreationTimestamp="2026-02-24 10:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 10:18:05.892531581 +0000 UTC m=+1390.349054144" watchObservedRunningTime="2026-02-24 10:18:05.898056072 +0000 UTC m=+1390.354578625" Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.934788 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-6f9bcc5599-rpkkn"] Feb 24 10:18:05 crc kubenswrapper[4755]: I0224 10:18:05.943898 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f9bcc5599-rpkkn"] Feb 24 10:18:06 crc kubenswrapper[4755]: I0224 10:18:06.291690 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-hv6bz"] Feb 24 10:18:06 crc kubenswrapper[4755]: W0224 10:18:06.299884 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod184ff6eb_4f43_4080_b895_08b61e26f3b1.slice/crio-b36d100a750257367d1bdcc528c383c12c0551e8227c3fda73b2fa7696f7a2a1 WatchSource:0}: Error finding container b36d100a750257367d1bdcc528c383c12c0551e8227c3fda73b2fa7696f7a2a1: Status 404 returned error can't find the container with id b36d100a750257367d1bdcc528c383c12c0551e8227c3fda73b2fa7696f7a2a1 Feb 24 10:18:06 crc kubenswrapper[4755]: I0224 10:18:06.313338 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50412: no serving certificate available for the kubelet" Feb 24 10:18:06 crc kubenswrapper[4755]: I0224 10:18:06.330267 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2364c725-9de7-41e3-8150-a73f93090496" path="/var/lib/kubelet/pods/2364c725-9de7-41e3-8150-a73f93090496/volumes" Feb 24 10:18:06 crc kubenswrapper[4755]: I0224 10:18:06.502955 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f98e0f6d-c3fa-4931-a452-dbe169ae113d-etc-swift\") pod \"swift-storage-0\" (UID: \"f98e0f6d-c3fa-4931-a452-dbe169ae113d\") " pod="openstack/swift-storage-0" Feb 24 10:18:06 crc kubenswrapper[4755]: E0224 10:18:06.503172 4755 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 24 10:18:06 crc kubenswrapper[4755]: E0224 10:18:06.503422 4755 projected.go:194] Error preparing data for projected 
volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 24 10:18:06 crc kubenswrapper[4755]: E0224 10:18:06.503505 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f98e0f6d-c3fa-4931-a452-dbe169ae113d-etc-swift podName:f98e0f6d-c3fa-4931-a452-dbe169ae113d nodeName:}" failed. No retries permitted until 2026-02-24 10:18:08.503484201 +0000 UTC m=+1392.960006824 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f98e0f6d-c3fa-4931-a452-dbe169ae113d-etc-swift") pod "swift-storage-0" (UID: "f98e0f6d-c3fa-4931-a452-dbe169ae113d") : configmap "swift-ring-files" not found Feb 24 10:18:06 crc kubenswrapper[4755]: I0224 10:18:06.883410 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hv6bz" event={"ID":"184ff6eb-4f43-4080-b895-08b61e26f3b1","Type":"ContainerStarted","Data":"b36d100a750257367d1bdcc528c383c12c0551e8227c3fda73b2fa7696f7a2a1"} Feb 24 10:18:06 crc kubenswrapper[4755]: I0224 10:18:06.890537 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" event={"ID":"667faef0-2cfe-417b-9872-a36c104256aa","Type":"ContainerStarted","Data":"791d67ff7ebf4db0e3fb4bb159b6befa3ec406d845e00697fec5f290388984df"} Feb 24 10:18:06 crc kubenswrapper[4755]: I0224 10:18:06.916834 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" podStartSLOduration=3.91681302 podStartE2EDuration="3.91681302s" podCreationTimestamp="2026-02-24 10:18:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 10:18:06.912319581 +0000 UTC m=+1391.368842184" watchObservedRunningTime="2026-02-24 10:18:06.91681302 +0000 UTC m=+1391.373335573" Feb 24 10:18:07 crc kubenswrapper[4755]: I0224 10:18:07.899513 4755 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" Feb 24 10:18:08 crc kubenswrapper[4755]: I0224 10:18:08.131263 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50414: no serving certificate available for the kubelet" Feb 24 10:18:08 crc kubenswrapper[4755]: I0224 10:18:08.543997 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f98e0f6d-c3fa-4931-a452-dbe169ae113d-etc-swift\") pod \"swift-storage-0\" (UID: \"f98e0f6d-c3fa-4931-a452-dbe169ae113d\") " pod="openstack/swift-storage-0" Feb 24 10:18:08 crc kubenswrapper[4755]: E0224 10:18:08.544275 4755 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 24 10:18:08 crc kubenswrapper[4755]: E0224 10:18:08.544737 4755 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 24 10:18:08 crc kubenswrapper[4755]: E0224 10:18:08.544836 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f98e0f6d-c3fa-4931-a452-dbe169ae113d-etc-swift podName:f98e0f6d-c3fa-4931-a452-dbe169ae113d nodeName:}" failed. No retries permitted until 2026-02-24 10:18:12.544810434 +0000 UTC m=+1397.001332987 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f98e0f6d-c3fa-4931-a452-dbe169ae113d-etc-swift") pod "swift-storage-0" (UID: "f98e0f6d-c3fa-4931-a452-dbe169ae113d") : configmap "swift-ring-files" not found Feb 24 10:18:09 crc kubenswrapper[4755]: I0224 10:18:09.369628 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50418: no serving certificate available for the kubelet" Feb 24 10:18:09 crc kubenswrapper[4755]: I0224 10:18:09.887328 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 24 10:18:09 crc kubenswrapper[4755]: I0224 10:18:09.887406 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 10:18:09 crc kubenswrapper[4755]: I0224 10:18:09.918879 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hv6bz" event={"ID":"184ff6eb-4f43-4080-b895-08b61e26f3b1","Type":"ContainerStarted","Data":"72a6dfde2d9ecfd5febac9dee27fbb1baa6256098b0b0a407252299bd93b4da8"} Feb 24 10:18:09 crc kubenswrapper[4755]: I0224 10:18:09.959863 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-hv6bz" podStartSLOduration=2.179765236 podStartE2EDuration="4.959833334s" podCreationTimestamp="2026-02-24 10:18:05 +0000 UTC" firstStartedPulling="2026-02-24 10:18:06.30293633 +0000 UTC m=+1390.759458903" lastFinishedPulling="2026-02-24 10:18:09.083004458 +0000 UTC m=+1393.539527001" observedRunningTime="2026-02-24 10:18:09.947388757 +0000 UTC m=+1394.403911340" watchObservedRunningTime="2026-02-24 10:18:09.959833334 +0000 UTC m=+1394.416355907" Feb 24 10:18:11 crc kubenswrapper[4755]: I0224 10:18:11.185122 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50422: no serving certificate available for the kubelet" Feb 24 10:18:11 crc kubenswrapper[4755]: I0224 10:18:11.289468 4755 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 10:18:11 crc kubenswrapper[4755]: I0224 10:18:11.289592 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 24 10:18:11 crc kubenswrapper[4755]: I0224 10:18:11.935338 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" Feb 24 10:18:12 crc kubenswrapper[4755]: I0224 10:18:12.418938 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50424: no serving certificate available for the kubelet" Feb 24 10:18:12 crc kubenswrapper[4755]: I0224 10:18:12.623719 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f98e0f6d-c3fa-4931-a452-dbe169ae113d-etc-swift\") pod \"swift-storage-0\" (UID: \"f98e0f6d-c3fa-4931-a452-dbe169ae113d\") " pod="openstack/swift-storage-0" Feb 24 10:18:12 crc kubenswrapper[4755]: E0224 10:18:12.625056 4755 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 24 10:18:12 crc kubenswrapper[4755]: E0224 10:18:12.625169 4755 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 24 10:18:12 crc kubenswrapper[4755]: E0224 10:18:12.625298 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f98e0f6d-c3fa-4931-a452-dbe169ae113d-etc-swift podName:f98e0f6d-c3fa-4931-a452-dbe169ae113d nodeName:}" failed. No retries permitted until 2026-02-24 10:18:20.625259586 +0000 UTC m=+1405.081782139 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f98e0f6d-c3fa-4931-a452-dbe169ae113d-etc-swift") pod "swift-storage-0" (UID: "f98e0f6d-c3fa-4931-a452-dbe169ae113d") : configmap "swift-ring-files" not found Feb 24 10:18:13 crc kubenswrapper[4755]: I0224 10:18:13.995148 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" Feb 24 10:18:14 crc kubenswrapper[4755]: I0224 10:18:14.083380 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77c77c9f89-tggz4"] Feb 24 10:18:14 crc kubenswrapper[4755]: I0224 10:18:14.083948 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" podUID="a75a7d6b-7798-43f7-86b4-539d9df80312" containerName="dnsmasq-dns" containerID="cri-o://4fb369252712ae431e33c738d3a5495dd3386431677c2e87fe761f8b46b5f2d2" gracePeriod=10 Feb 24 10:18:14 crc kubenswrapper[4755]: I0224 10:18:14.235112 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45514: no serving certificate available for the kubelet" Feb 24 10:18:14 crc kubenswrapper[4755]: I0224 10:18:14.547492 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" Feb 24 10:18:14 crc kubenswrapper[4755]: I0224 10:18:14.658572 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a75a7d6b-7798-43f7-86b4-539d9df80312-dns-svc\") pod \"a75a7d6b-7798-43f7-86b4-539d9df80312\" (UID: \"a75a7d6b-7798-43f7-86b4-539d9df80312\") " Feb 24 10:18:14 crc kubenswrapper[4755]: I0224 10:18:14.658913 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a75a7d6b-7798-43f7-86b4-539d9df80312-ovsdbserver-sb\") pod \"a75a7d6b-7798-43f7-86b4-539d9df80312\" (UID: \"a75a7d6b-7798-43f7-86b4-539d9df80312\") " Feb 24 10:18:14 crc kubenswrapper[4755]: I0224 10:18:14.659087 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq2mk\" (UniqueName: \"kubernetes.io/projected/a75a7d6b-7798-43f7-86b4-539d9df80312-kube-api-access-kq2mk\") pod \"a75a7d6b-7798-43f7-86b4-539d9df80312\" (UID: \"a75a7d6b-7798-43f7-86b4-539d9df80312\") " Feb 24 10:18:14 crc kubenswrapper[4755]: I0224 10:18:14.659205 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a75a7d6b-7798-43f7-86b4-539d9df80312-config\") pod \"a75a7d6b-7798-43f7-86b4-539d9df80312\" (UID: \"a75a7d6b-7798-43f7-86b4-539d9df80312\") " Feb 24 10:18:14 crc kubenswrapper[4755]: I0224 10:18:14.664776 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a75a7d6b-7798-43f7-86b4-539d9df80312-kube-api-access-kq2mk" (OuterVolumeSpecName: "kube-api-access-kq2mk") pod "a75a7d6b-7798-43f7-86b4-539d9df80312" (UID: "a75a7d6b-7798-43f7-86b4-539d9df80312"). InnerVolumeSpecName "kube-api-access-kq2mk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:18:14 crc kubenswrapper[4755]: I0224 10:18:14.711329 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a75a7d6b-7798-43f7-86b4-539d9df80312-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a75a7d6b-7798-43f7-86b4-539d9df80312" (UID: "a75a7d6b-7798-43f7-86b4-539d9df80312"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:18:14 crc kubenswrapper[4755]: I0224 10:18:14.723701 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a75a7d6b-7798-43f7-86b4-539d9df80312-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a75a7d6b-7798-43f7-86b4-539d9df80312" (UID: "a75a7d6b-7798-43f7-86b4-539d9df80312"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:18:14 crc kubenswrapper[4755]: I0224 10:18:14.726593 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a75a7d6b-7798-43f7-86b4-539d9df80312-config" (OuterVolumeSpecName: "config") pod "a75a7d6b-7798-43f7-86b4-539d9df80312" (UID: "a75a7d6b-7798-43f7-86b4-539d9df80312"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:18:14 crc kubenswrapper[4755]: I0224 10:18:14.762338 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a75a7d6b-7798-43f7-86b4-539d9df80312-config\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:14 crc kubenswrapper[4755]: I0224 10:18:14.762392 4755 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a75a7d6b-7798-43f7-86b4-539d9df80312-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:14 crc kubenswrapper[4755]: I0224 10:18:14.762405 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a75a7d6b-7798-43f7-86b4-539d9df80312-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:14 crc kubenswrapper[4755]: I0224 10:18:14.762417 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq2mk\" (UniqueName: \"kubernetes.io/projected/a75a7d6b-7798-43f7-86b4-539d9df80312-kube-api-access-kq2mk\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:14 crc kubenswrapper[4755]: I0224 10:18:14.972659 4755 generic.go:334] "Generic (PLEG): container finished" podID="a75a7d6b-7798-43f7-86b4-539d9df80312" containerID="4fb369252712ae431e33c738d3a5495dd3386431677c2e87fe761f8b46b5f2d2" exitCode=0 Feb 24 10:18:14 crc kubenswrapper[4755]: I0224 10:18:14.972720 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" event={"ID":"a75a7d6b-7798-43f7-86b4-539d9df80312","Type":"ContainerDied","Data":"4fb369252712ae431e33c738d3a5495dd3386431677c2e87fe761f8b46b5f2d2"} Feb 24 10:18:14 crc kubenswrapper[4755]: I0224 10:18:14.972749 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" event={"ID":"a75a7d6b-7798-43f7-86b4-539d9df80312","Type":"ContainerDied","Data":"c95e58962fdc71c85c092c97746f447dc45e988e449dc698beee50d3fa7312f9"} Feb 24 
10:18:14 crc kubenswrapper[4755]: I0224 10:18:14.972766 4755 scope.go:117] "RemoveContainer" containerID="4fb369252712ae431e33c738d3a5495dd3386431677c2e87fe761f8b46b5f2d2" Feb 24 10:18:14 crc kubenswrapper[4755]: I0224 10:18:14.972979 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77c77c9f89-tggz4" Feb 24 10:18:14 crc kubenswrapper[4755]: I0224 10:18:14.998627 4755 scope.go:117] "RemoveContainer" containerID="021bd1d528cfd0d025d61b5e72c1d1aba1e5d74198ae355b746b2b09b0751631" Feb 24 10:18:15 crc kubenswrapper[4755]: I0224 10:18:15.005773 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77c77c9f89-tggz4"] Feb 24 10:18:15 crc kubenswrapper[4755]: I0224 10:18:15.014975 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77c77c9f89-tggz4"] Feb 24 10:18:15 crc kubenswrapper[4755]: I0224 10:18:15.030118 4755 scope.go:117] "RemoveContainer" containerID="4fb369252712ae431e33c738d3a5495dd3386431677c2e87fe761f8b46b5f2d2" Feb 24 10:18:15 crc kubenswrapper[4755]: E0224 10:18:15.030696 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fb369252712ae431e33c738d3a5495dd3386431677c2e87fe761f8b46b5f2d2\": container with ID starting with 4fb369252712ae431e33c738d3a5495dd3386431677c2e87fe761f8b46b5f2d2 not found: ID does not exist" containerID="4fb369252712ae431e33c738d3a5495dd3386431677c2e87fe761f8b46b5f2d2" Feb 24 10:18:15 crc kubenswrapper[4755]: I0224 10:18:15.030736 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fb369252712ae431e33c738d3a5495dd3386431677c2e87fe761f8b46b5f2d2"} err="failed to get container status \"4fb369252712ae431e33c738d3a5495dd3386431677c2e87fe761f8b46b5f2d2\": rpc error: code = NotFound desc = could not find container \"4fb369252712ae431e33c738d3a5495dd3386431677c2e87fe761f8b46b5f2d2\": container with ID 
starting with 4fb369252712ae431e33c738d3a5495dd3386431677c2e87fe761f8b46b5f2d2 not found: ID does not exist" Feb 24 10:18:15 crc kubenswrapper[4755]: I0224 10:18:15.030766 4755 scope.go:117] "RemoveContainer" containerID="021bd1d528cfd0d025d61b5e72c1d1aba1e5d74198ae355b746b2b09b0751631" Feb 24 10:18:15 crc kubenswrapper[4755]: E0224 10:18:15.031036 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"021bd1d528cfd0d025d61b5e72c1d1aba1e5d74198ae355b746b2b09b0751631\": container with ID starting with 021bd1d528cfd0d025d61b5e72c1d1aba1e5d74198ae355b746b2b09b0751631 not found: ID does not exist" containerID="021bd1d528cfd0d025d61b5e72c1d1aba1e5d74198ae355b746b2b09b0751631" Feb 24 10:18:15 crc kubenswrapper[4755]: I0224 10:18:15.031099 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"021bd1d528cfd0d025d61b5e72c1d1aba1e5d74198ae355b746b2b09b0751631"} err="failed to get container status \"021bd1d528cfd0d025d61b5e72c1d1aba1e5d74198ae355b746b2b09b0751631\": rpc error: code = NotFound desc = could not find container \"021bd1d528cfd0d025d61b5e72c1d1aba1e5d74198ae355b746b2b09b0751631\": container with ID starting with 021bd1d528cfd0d025d61b5e72c1d1aba1e5d74198ae355b746b2b09b0751631 not found: ID does not exist" Feb 24 10:18:15 crc kubenswrapper[4755]: I0224 10:18:15.470149 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45528: no serving certificate available for the kubelet" Feb 24 10:18:15 crc kubenswrapper[4755]: I0224 10:18:15.986806 4755 generic.go:334] "Generic (PLEG): container finished" podID="184ff6eb-4f43-4080-b895-08b61e26f3b1" containerID="72a6dfde2d9ecfd5febac9dee27fbb1baa6256098b0b0a407252299bd93b4da8" exitCode=0 Feb 24 10:18:15 crc kubenswrapper[4755]: I0224 10:18:15.986906 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hv6bz" 
event={"ID":"184ff6eb-4f43-4080-b895-08b61e26f3b1","Type":"ContainerDied","Data":"72a6dfde2d9ecfd5febac9dee27fbb1baa6256098b0b0a407252299bd93b4da8"} Feb 24 10:18:16 crc kubenswrapper[4755]: I0224 10:18:16.334730 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a75a7d6b-7798-43f7-86b4-539d9df80312" path="/var/lib/kubelet/pods/a75a7d6b-7798-43f7-86b4-539d9df80312/volumes" Feb 24 10:18:17 crc kubenswrapper[4755]: I0224 10:18:17.279730 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45544: no serving certificate available for the kubelet" Feb 24 10:18:17 crc kubenswrapper[4755]: I0224 10:18:17.393306 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-hv6bz" Feb 24 10:18:17 crc kubenswrapper[4755]: I0224 10:18:17.517238 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wncg\" (UniqueName: \"kubernetes.io/projected/184ff6eb-4f43-4080-b895-08b61e26f3b1-kube-api-access-4wncg\") pod \"184ff6eb-4f43-4080-b895-08b61e26f3b1\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " Feb 24 10:18:17 crc kubenswrapper[4755]: I0224 10:18:17.517316 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/184ff6eb-4f43-4080-b895-08b61e26f3b1-combined-ca-bundle\") pod \"184ff6eb-4f43-4080-b895-08b61e26f3b1\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " Feb 24 10:18:17 crc kubenswrapper[4755]: I0224 10:18:17.517366 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/184ff6eb-4f43-4080-b895-08b61e26f3b1-swiftconf\") pod \"184ff6eb-4f43-4080-b895-08b61e26f3b1\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " Feb 24 10:18:17 crc kubenswrapper[4755]: I0224 10:18:17.518380 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/184ff6eb-4f43-4080-b895-08b61e26f3b1-scripts\") pod \"184ff6eb-4f43-4080-b895-08b61e26f3b1\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " Feb 24 10:18:17 crc kubenswrapper[4755]: I0224 10:18:17.518506 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/184ff6eb-4f43-4080-b895-08b61e26f3b1-etc-swift\") pod \"184ff6eb-4f43-4080-b895-08b61e26f3b1\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " Feb 24 10:18:17 crc kubenswrapper[4755]: I0224 10:18:17.518660 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/184ff6eb-4f43-4080-b895-08b61e26f3b1-dispersionconf\") pod \"184ff6eb-4f43-4080-b895-08b61e26f3b1\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " Feb 24 10:18:17 crc kubenswrapper[4755]: I0224 10:18:17.518746 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/184ff6eb-4f43-4080-b895-08b61e26f3b1-ring-data-devices\") pod \"184ff6eb-4f43-4080-b895-08b61e26f3b1\" (UID: \"184ff6eb-4f43-4080-b895-08b61e26f3b1\") " Feb 24 10:18:17 crc kubenswrapper[4755]: I0224 10:18:17.519818 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/184ff6eb-4f43-4080-b895-08b61e26f3b1-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "184ff6eb-4f43-4080-b895-08b61e26f3b1" (UID: "184ff6eb-4f43-4080-b895-08b61e26f3b1"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:18:17 crc kubenswrapper[4755]: I0224 10:18:17.520168 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/184ff6eb-4f43-4080-b895-08b61e26f3b1-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "184ff6eb-4f43-4080-b895-08b61e26f3b1" (UID: "184ff6eb-4f43-4080-b895-08b61e26f3b1"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:18:17 crc kubenswrapper[4755]: I0224 10:18:17.524720 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/184ff6eb-4f43-4080-b895-08b61e26f3b1-kube-api-access-4wncg" (OuterVolumeSpecName: "kube-api-access-4wncg") pod "184ff6eb-4f43-4080-b895-08b61e26f3b1" (UID: "184ff6eb-4f43-4080-b895-08b61e26f3b1"). InnerVolumeSpecName "kube-api-access-4wncg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:18:17 crc kubenswrapper[4755]: I0224 10:18:17.527297 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/184ff6eb-4f43-4080-b895-08b61e26f3b1-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "184ff6eb-4f43-4080-b895-08b61e26f3b1" (UID: "184ff6eb-4f43-4080-b895-08b61e26f3b1"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 10:18:17 crc kubenswrapper[4755]: I0224 10:18:17.551197 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/184ff6eb-4f43-4080-b895-08b61e26f3b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "184ff6eb-4f43-4080-b895-08b61e26f3b1" (UID: "184ff6eb-4f43-4080-b895-08b61e26f3b1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 10:18:17 crc kubenswrapper[4755]: I0224 10:18:17.561550 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/184ff6eb-4f43-4080-b895-08b61e26f3b1-scripts" (OuterVolumeSpecName: "scripts") pod "184ff6eb-4f43-4080-b895-08b61e26f3b1" (UID: "184ff6eb-4f43-4080-b895-08b61e26f3b1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:18:17 crc kubenswrapper[4755]: I0224 10:18:17.562247 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/184ff6eb-4f43-4080-b895-08b61e26f3b1-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "184ff6eb-4f43-4080-b895-08b61e26f3b1" (UID: "184ff6eb-4f43-4080-b895-08b61e26f3b1"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 10:18:17 crc kubenswrapper[4755]: I0224 10:18:17.621230 4755 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/184ff6eb-4f43-4080-b895-08b61e26f3b1-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:17 crc kubenswrapper[4755]: I0224 10:18:17.621263 4755 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/184ff6eb-4f43-4080-b895-08b61e26f3b1-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:17 crc kubenswrapper[4755]: I0224 10:18:17.621278 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wncg\" (UniqueName: \"kubernetes.io/projected/184ff6eb-4f43-4080-b895-08b61e26f3b1-kube-api-access-4wncg\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:17 crc kubenswrapper[4755]: I0224 10:18:17.621292 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/184ff6eb-4f43-4080-b895-08b61e26f3b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 
10:18:17 crc kubenswrapper[4755]: I0224 10:18:17.621313 4755 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/184ff6eb-4f43-4080-b895-08b61e26f3b1-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:17 crc kubenswrapper[4755]: I0224 10:18:17.621325 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/184ff6eb-4f43-4080-b895-08b61e26f3b1-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:17 crc kubenswrapper[4755]: I0224 10:18:17.621337 4755 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/184ff6eb-4f43-4080-b895-08b61e26f3b1-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:18 crc kubenswrapper[4755]: I0224 10:18:18.013573 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hv6bz" event={"ID":"184ff6eb-4f43-4080-b895-08b61e26f3b1","Type":"ContainerDied","Data":"b36d100a750257367d1bdcc528c383c12c0551e8227c3fda73b2fa7696f7a2a1"} Feb 24 10:18:18 crc kubenswrapper[4755]: I0224 10:18:18.013636 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b36d100a750257367d1bdcc528c383c12c0551e8227c3fda73b2fa7696f7a2a1" Feb 24 10:18:18 crc kubenswrapper[4755]: I0224 10:18:18.013676 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-hv6bz"
Feb 24 10:18:18 crc kubenswrapper[4755]: I0224 10:18:18.521563 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45558: no serving certificate available for the kubelet"
Feb 24 10:18:19 crc kubenswrapper[4755]: I0224 10:18:19.022309 4755 generic.go:334] "Generic (PLEG): container finished" podID="36ea9987-0fd2-4c40-845d-463254c1fecf" containerID="3bbfb182a229dbbd646b146a6041ddeaa1238db4c51ce14ad502b3e53e080b8a" exitCode=0
Feb 24 10:18:19 crc kubenswrapper[4755]: I0224 10:18:19.022390 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"36ea9987-0fd2-4c40-845d-463254c1fecf","Type":"ContainerDied","Data":"3bbfb182a229dbbd646b146a6041ddeaa1238db4c51ce14ad502b3e53e080b8a"}
Feb 24 10:18:19 crc kubenswrapper[4755]: I0224 10:18:19.025321 4755 generic.go:334] "Generic (PLEG): container finished" podID="6f17c8cb-fdce-4338-a08b-351043733dd8" containerID="4398a9085d6666f29c47caee783a17004fe8ee15c1174181c3dc3c79c676743b" exitCode=0
Feb 24 10:18:19 crc kubenswrapper[4755]: I0224 10:18:19.025362 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6f17c8cb-fdce-4338-a08b-351043733dd8","Type":"ContainerDied","Data":"4398a9085d6666f29c47caee783a17004fe8ee15c1174181c3dc3c79c676743b"}
Feb 24 10:18:20 crc kubenswrapper[4755]: I0224 10:18:20.037750 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6f17c8cb-fdce-4338-a08b-351043733dd8","Type":"ContainerStarted","Data":"2c6ce70f908f76c83d3260ef0951dc475e16893eb759a1cf02b6b3a50c0646b7"}
Feb 24 10:18:20 crc kubenswrapper[4755]: I0224 10:18:20.038290 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Feb 24 10:18:20 crc kubenswrapper[4755]: I0224 10:18:20.040344 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"36ea9987-0fd2-4c40-845d-463254c1fecf","Type":"ContainerStarted","Data":"3229e39cc177b569281a8854005fc104570e29cd3a0c422d9107c416cfebbe3e"}
Feb 24 10:18:20 crc kubenswrapper[4755]: I0224 10:18:20.041058 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Feb 24 10:18:20 crc kubenswrapper[4755]: I0224 10:18:20.066746 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.827395457 podStartE2EDuration="54.066714333s" podCreationTimestamp="2026-02-24 10:17:26 +0000 UTC" firstStartedPulling="2026-02-24 10:17:28.095411814 +0000 UTC m=+1352.551934357" lastFinishedPulling="2026-02-24 10:17:45.33473065 +0000 UTC m=+1369.791253233" observedRunningTime="2026-02-24 10:18:20.064644649 +0000 UTC m=+1404.521167192" watchObservedRunningTime="2026-02-24 10:18:20.066714333 +0000 UTC m=+1404.523236916"
Feb 24 10:18:20 crc kubenswrapper[4755]: I0224 10:18:20.095353 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.707974669 podStartE2EDuration="53.095335381s" podCreationTimestamp="2026-02-24 10:17:27 +0000 UTC" firstStartedPulling="2026-02-24 10:17:29.019684821 +0000 UTC m=+1353.476207364" lastFinishedPulling="2026-02-24 10:17:45.407045523 +0000 UTC m=+1369.863568076" observedRunningTime="2026-02-24 10:18:20.09175238 +0000 UTC m=+1404.548274923" watchObservedRunningTime="2026-02-24 10:18:20.095335381 +0000 UTC m=+1404.551857924"
Feb 24 10:18:20 crc kubenswrapper[4755]: I0224 10:18:20.324684 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45560: no serving certificate available for the kubelet"
Feb 24 10:18:20 crc kubenswrapper[4755]: I0224 10:18:20.686586 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f98e0f6d-c3fa-4931-a452-dbe169ae113d-etc-swift\") pod \"swift-storage-0\" (UID: \"f98e0f6d-c3fa-4931-a452-dbe169ae113d\") " pod="openstack/swift-storage-0"
Feb 24 10:18:20 crc kubenswrapper[4755]: I0224 10:18:20.696681 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f98e0f6d-c3fa-4931-a452-dbe169ae113d-etc-swift\") pod \"swift-storage-0\" (UID: \"f98e0f6d-c3fa-4931-a452-dbe169ae113d\") " pod="openstack/swift-storage-0"
Feb 24 10:18:20 crc kubenswrapper[4755]: I0224 10:18:20.742634 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 24 10:18:21 crc kubenswrapper[4755]: W0224 10:18:21.364241 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf98e0f6d_c3fa_4931_a452_dbe169ae113d.slice/crio-df26271728bf92413e27249aa817307cd3719ccb7a25511dbf2e29c6cbc72f0c WatchSource:0}: Error finding container df26271728bf92413e27249aa817307cd3719ccb7a25511dbf2e29c6cbc72f0c: Status 404 returned error can't find the container with id df26271728bf92413e27249aa817307cd3719ccb7a25511dbf2e29c6cbc72f0c
Feb 24 10:18:21 crc kubenswrapper[4755]: I0224 10:18:21.369753 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 24 10:18:21 crc kubenswrapper[4755]: I0224 10:18:21.572651 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45574: no serving certificate available for the kubelet"
Feb 24 10:18:21 crc kubenswrapper[4755]: I0224 10:18:21.695469 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 10:18:21 crc kubenswrapper[4755]: I0224 10:18:21.695790 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 10:18:22 crc kubenswrapper[4755]: I0224 10:18:22.082675 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f98e0f6d-c3fa-4931-a452-dbe169ae113d","Type":"ContainerStarted","Data":"df26271728bf92413e27249aa817307cd3719ccb7a25511dbf2e29c6cbc72f0c"}
Feb 24 10:18:22 crc kubenswrapper[4755]: I0224 10:18:22.555877 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Feb 24 10:18:23 crc kubenswrapper[4755]: I0224 10:18:23.097880 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f98e0f6d-c3fa-4931-a452-dbe169ae113d","Type":"ContainerStarted","Data":"ecd4105e69d2278a1e06df671ef0d2a731a535334e104142f85e0e84543801f5"}
Feb 24 10:18:23 crc kubenswrapper[4755]: I0224 10:18:23.098233 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f98e0f6d-c3fa-4931-a452-dbe169ae113d","Type":"ContainerStarted","Data":"c17bf47e8363442a9dd95a47e247e23b2e9ef9b592333b940ef94bcea1b14651"}
Feb 24 10:18:23 crc kubenswrapper[4755]: I0224 10:18:23.366821 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45586: no serving certificate available for the kubelet"
Feb 24 10:18:24 crc kubenswrapper[4755]: I0224 10:18:24.111717 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f98e0f6d-c3fa-4931-a452-dbe169ae113d","Type":"ContainerStarted","Data":"ae422021a5a81942fac0c1eb614d7135e7b58621e92a6d43b40de113f3ace62a"}
Feb 24 10:18:24 crc kubenswrapper[4755]: I0224 10:18:24.111976 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f98e0f6d-c3fa-4931-a452-dbe169ae113d","Type":"ContainerStarted","Data":"23c1b8804e5e1f628473ac4d756fbb6aaf88ce522efe105b8e5c7a456218ecc7"}
Feb 24 10:18:24 crc kubenswrapper[4755]: I0224 10:18:24.626268 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52262: no serving certificate available for the kubelet"
Feb 24 10:18:25 crc kubenswrapper[4755]: I0224 10:18:25.121363 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f98e0f6d-c3fa-4931-a452-dbe169ae113d","Type":"ContainerStarted","Data":"3ab4c75833ad8359ed42521f7c1036402cfde404a728309c92650ae8bac249d7"}
Feb 24 10:18:25 crc kubenswrapper[4755]: I0224 10:18:25.121408 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f98e0f6d-c3fa-4931-a452-dbe169ae113d","Type":"ContainerStarted","Data":"c0bfdbd61a8cd837b103876d740ab6e0aec5db8373921f917716ecb592b8d98c"}
Feb 24 10:18:25 crc kubenswrapper[4755]: I0224 10:18:25.121418 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f98e0f6d-c3fa-4931-a452-dbe169ae113d","Type":"ContainerStarted","Data":"7c755dddc6b872bccf74ebaf30a13f4e54c7eb7a135c255d8eb91113f27d4a72"}
Feb 24 10:18:25 crc kubenswrapper[4755]: I0224 10:18:25.121428 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f98e0f6d-c3fa-4931-a452-dbe169ae113d","Type":"ContainerStarted","Data":"c06be988dc20cb8629ece463e4a9dee1274689c13e8b8db7daff7d942f5eb873"}
Feb 24 10:18:26 crc kubenswrapper[4755]: I0224 10:18:26.549167 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52270: no serving certificate available for the kubelet"
Feb 24 10:18:27 crc kubenswrapper[4755]: I0224 10:18:27.409986 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-7zjmh" podUID="22e0eb37-c9e8-4e7c-a986-09c4c7700bd7" containerName="ovn-controller" probeResult="failure" output=<
Feb 24 10:18:27 crc kubenswrapper[4755]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 24 10:18:27 crc kubenswrapper[4755]: >
Feb 24 10:18:27 crc kubenswrapper[4755]: I0224 10:18:27.684597 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52276: no serving certificate available for the kubelet"
Feb 24 10:18:29 crc kubenswrapper[4755]: I0224 10:18:29.162075 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f98e0f6d-c3fa-4931-a452-dbe169ae113d","Type":"ContainerStarted","Data":"fac924994ac017e87fb5471a772277b379d1ceca53522f4e586c84c7bae00f1a"}
Feb 24 10:18:29 crc kubenswrapper[4755]: I0224 10:18:29.162717 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f98e0f6d-c3fa-4931-a452-dbe169ae113d","Type":"ContainerStarted","Data":"8c8aff8d9cb47cc237c1a08467618083517d32f1f6dfbcf98903e3ef857d1b65"}
Feb 24 10:18:29 crc kubenswrapper[4755]: I0224 10:18:29.162733 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f98e0f6d-c3fa-4931-a452-dbe169ae113d","Type":"ContainerStarted","Data":"3bc5d643c0e3691826047a4ec53668d05ec243f253cc42e266d762f18c6b55e8"}
Feb 24 10:18:29 crc kubenswrapper[4755]: I0224 10:18:29.162745 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f98e0f6d-c3fa-4931-a452-dbe169ae113d","Type":"ContainerStarted","Data":"8c0d6bd3f31c5ac43e46f37ff62334e1c0442e1e0e9206f623b909ce1c4c9d37"}
Feb 24 10:18:29 crc kubenswrapper[4755]: I0224 10:18:29.162760 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f98e0f6d-c3fa-4931-a452-dbe169ae113d","Type":"ContainerStarted","Data":"23c770041752a1f676a231c4f50542b522e984369d85db7dc225a4bc64b23087"}
Feb 24 10:18:29 crc kubenswrapper[4755]: I0224 10:18:29.162772 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f98e0f6d-c3fa-4931-a452-dbe169ae113d","Type":"ContainerStarted","Data":"ac285972e3ede69c60ed57acab4c98851dbc6122fbe719e2acfc26ced6537c3d"}
Feb 24 10:18:29 crc kubenswrapper[4755]: I0224 10:18:29.597911 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52288: no serving certificate available for the kubelet"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.184897 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f98e0f6d-c3fa-4931-a452-dbe169ae113d","Type":"ContainerStarted","Data":"9887f945a522de1a85d4c2eec972472be3fa62631cb522e0ddf812a8be579c4f"}
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.253036 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.609635492 podStartE2EDuration="27.253019326s" podCreationTimestamp="2026-02-24 10:18:03 +0000 UTC" firstStartedPulling="2026-02-24 10:18:21.366636692 +0000 UTC m=+1405.823159235" lastFinishedPulling="2026-02-24 10:18:28.010020526 +0000 UTC m=+1412.466543069" observedRunningTime="2026-02-24 10:18:30.24669688 +0000 UTC m=+1414.703219463" watchObservedRunningTime="2026-02-24 10:18:30.253019326 +0000 UTC m=+1414.709541869"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.611160 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785cfccf4f-x8pjc"]
Feb 24 10:18:30 crc kubenswrapper[4755]: E0224 10:18:30.611475 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a75a7d6b-7798-43f7-86b4-539d9df80312" containerName="init"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.611490 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="a75a7d6b-7798-43f7-86b4-539d9df80312" containerName="init"
Feb 24 10:18:30 crc kubenswrapper[4755]: E0224 10:18:30.611501 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="184ff6eb-4f43-4080-b895-08b61e26f3b1" containerName="swift-ring-rebalance"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.611508 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="184ff6eb-4f43-4080-b895-08b61e26f3b1" containerName="swift-ring-rebalance"
Feb 24 10:18:30 crc kubenswrapper[4755]: E0224 10:18:30.611528 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a75a7d6b-7798-43f7-86b4-539d9df80312" containerName="dnsmasq-dns"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.611535 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="a75a7d6b-7798-43f7-86b4-539d9df80312" containerName="dnsmasq-dns"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.611662 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="a75a7d6b-7798-43f7-86b4-539d9df80312" containerName="dnsmasq-dns"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.611681 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="184ff6eb-4f43-4080-b895-08b61e26f3b1" containerName="swift-ring-rebalance"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.627724 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.639936 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.653395 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18daa7df-3f43-42f1-aa1e-264e90ef1d65-ovsdbserver-sb\") pod \"dnsmasq-dns-785cfccf4f-x8pjc\" (UID: \"18daa7df-3f43-42f1-aa1e-264e90ef1d65\") " pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.653500 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tpvg\" (UniqueName: \"kubernetes.io/projected/18daa7df-3f43-42f1-aa1e-264e90ef1d65-kube-api-access-6tpvg\") pod \"dnsmasq-dns-785cfccf4f-x8pjc\" (UID: \"18daa7df-3f43-42f1-aa1e-264e90ef1d65\") " pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.653564 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18daa7df-3f43-42f1-aa1e-264e90ef1d65-config\") pod \"dnsmasq-dns-785cfccf4f-x8pjc\" (UID: \"18daa7df-3f43-42f1-aa1e-264e90ef1d65\") " pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.653596 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18daa7df-3f43-42f1-aa1e-264e90ef1d65-dns-swift-storage-0\") pod \"dnsmasq-dns-785cfccf4f-x8pjc\" (UID: \"18daa7df-3f43-42f1-aa1e-264e90ef1d65\") " pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.653642 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18daa7df-3f43-42f1-aa1e-264e90ef1d65-ovsdbserver-nb\") pod \"dnsmasq-dns-785cfccf4f-x8pjc\" (UID: \"18daa7df-3f43-42f1-aa1e-264e90ef1d65\") " pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.653678 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18daa7df-3f43-42f1-aa1e-264e90ef1d65-dns-svc\") pod \"dnsmasq-dns-785cfccf4f-x8pjc\" (UID: \"18daa7df-3f43-42f1-aa1e-264e90ef1d65\") " pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.654028 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785cfccf4f-x8pjc"]
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.692659 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52292: no serving certificate available for the kubelet"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.722632 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52304: no serving certificate available for the kubelet"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.754772 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tpvg\" (UniqueName: \"kubernetes.io/projected/18daa7df-3f43-42f1-aa1e-264e90ef1d65-kube-api-access-6tpvg\") pod \"dnsmasq-dns-785cfccf4f-x8pjc\" (UID: \"18daa7df-3f43-42f1-aa1e-264e90ef1d65\") " pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.754843 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18daa7df-3f43-42f1-aa1e-264e90ef1d65-config\") pod \"dnsmasq-dns-785cfccf4f-x8pjc\" (UID: \"18daa7df-3f43-42f1-aa1e-264e90ef1d65\") " pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.754871 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18daa7df-3f43-42f1-aa1e-264e90ef1d65-dns-swift-storage-0\") pod \"dnsmasq-dns-785cfccf4f-x8pjc\" (UID: \"18daa7df-3f43-42f1-aa1e-264e90ef1d65\") " pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.754903 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18daa7df-3f43-42f1-aa1e-264e90ef1d65-ovsdbserver-nb\") pod \"dnsmasq-dns-785cfccf4f-x8pjc\" (UID: \"18daa7df-3f43-42f1-aa1e-264e90ef1d65\") " pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.754928 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18daa7df-3f43-42f1-aa1e-264e90ef1d65-dns-svc\") pod \"dnsmasq-dns-785cfccf4f-x8pjc\" (UID: \"18daa7df-3f43-42f1-aa1e-264e90ef1d65\") " pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.754963 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18daa7df-3f43-42f1-aa1e-264e90ef1d65-ovsdbserver-sb\") pod \"dnsmasq-dns-785cfccf4f-x8pjc\" (UID: \"18daa7df-3f43-42f1-aa1e-264e90ef1d65\") " pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.755854 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18daa7df-3f43-42f1-aa1e-264e90ef1d65-ovsdbserver-sb\") pod \"dnsmasq-dns-785cfccf4f-x8pjc\" (UID: \"18daa7df-3f43-42f1-aa1e-264e90ef1d65\") " pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.755923 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18daa7df-3f43-42f1-aa1e-264e90ef1d65-ovsdbserver-nb\") pod \"dnsmasq-dns-785cfccf4f-x8pjc\" (UID: \"18daa7df-3f43-42f1-aa1e-264e90ef1d65\") " pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.756320 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18daa7df-3f43-42f1-aa1e-264e90ef1d65-dns-svc\") pod \"dnsmasq-dns-785cfccf4f-x8pjc\" (UID: \"18daa7df-3f43-42f1-aa1e-264e90ef1d65\") " pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.756839 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18daa7df-3f43-42f1-aa1e-264e90ef1d65-config\") pod \"dnsmasq-dns-785cfccf4f-x8pjc\" (UID: \"18daa7df-3f43-42f1-aa1e-264e90ef1d65\") " pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.757142 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18daa7df-3f43-42f1-aa1e-264e90ef1d65-dns-swift-storage-0\") pod \"dnsmasq-dns-785cfccf4f-x8pjc\" (UID: \"18daa7df-3f43-42f1-aa1e-264e90ef1d65\") " pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.775383 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tpvg\" (UniqueName: \"kubernetes.io/projected/18daa7df-3f43-42f1-aa1e-264e90ef1d65-kube-api-access-6tpvg\") pod \"dnsmasq-dns-785cfccf4f-x8pjc\" (UID: \"18daa7df-3f43-42f1-aa1e-264e90ef1d65\") " pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc"
Feb 24 10:18:30 crc kubenswrapper[4755]: I0224 10:18:30.951350 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc"
Feb 24 10:18:31 crc kubenswrapper[4755]: I0224 10:18:31.391896 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785cfccf4f-x8pjc"]
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.223755 4755 generic.go:334] "Generic (PLEG): container finished" podID="18daa7df-3f43-42f1-aa1e-264e90ef1d65" containerID="65ec0dfbb15f5885646a8a4462a48a64b692b6d1c00dbe3d8ae7fda108d46a92" exitCode=0
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.223880 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc" event={"ID":"18daa7df-3f43-42f1-aa1e-264e90ef1d65","Type":"ContainerDied","Data":"65ec0dfbb15f5885646a8a4462a48a64b692b6d1c00dbe3d8ae7fda108d46a92"}
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.224406 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc" event={"ID":"18daa7df-3f43-42f1-aa1e-264e90ef1d65","Type":"ContainerStarted","Data":"256bd23c48f20e31f79f6636edb3de0abd0c86cc73f157ecd2dc9bc71bb37bf7"}
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.414532 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-7zjmh" podUID="22e0eb37-c9e8-4e7c-a986-09c4c7700bd7" containerName="ovn-controller" probeResult="failure" output=<
Feb 24 10:18:32 crc kubenswrapper[4755]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 24 10:18:32 crc kubenswrapper[4755]: >
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.437985 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-956vf"
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.448813 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-956vf"
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.634427 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52308: no serving certificate available for the kubelet"
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.678434 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7zjmh-config-snvmj"]
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.680238 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7zjmh-config-snvmj"
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.685222 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.688021 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7zjmh-config-snvmj"]
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.806254 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-scripts\") pod \"ovn-controller-7zjmh-config-snvmj\" (UID: \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\") " pod="openstack/ovn-controller-7zjmh-config-snvmj"
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.806429 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-var-run\") pod \"ovn-controller-7zjmh-config-snvmj\" (UID: \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\") " pod="openstack/ovn-controller-7zjmh-config-snvmj"
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.806483 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-additional-scripts\") pod \"ovn-controller-7zjmh-config-snvmj\" (UID: \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\") " pod="openstack/ovn-controller-7zjmh-config-snvmj"
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.806658 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-var-log-ovn\") pod \"ovn-controller-7zjmh-config-snvmj\" (UID: \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\") " pod="openstack/ovn-controller-7zjmh-config-snvmj"
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.806760 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-var-run-ovn\") pod \"ovn-controller-7zjmh-config-snvmj\" (UID: \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\") " pod="openstack/ovn-controller-7zjmh-config-snvmj"
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.806855 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlg9t\" (UniqueName: \"kubernetes.io/projected/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-kube-api-access-hlg9t\") pod \"ovn-controller-7zjmh-config-snvmj\" (UID: \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\") " pod="openstack/ovn-controller-7zjmh-config-snvmj"
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.908793 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-var-log-ovn\") pod \"ovn-controller-7zjmh-config-snvmj\" (UID: \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\") " pod="openstack/ovn-controller-7zjmh-config-snvmj"
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.908856 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-var-run-ovn\") pod \"ovn-controller-7zjmh-config-snvmj\" (UID: \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\") " pod="openstack/ovn-controller-7zjmh-config-snvmj"
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.908901 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlg9t\" (UniqueName: \"kubernetes.io/projected/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-kube-api-access-hlg9t\") pod \"ovn-controller-7zjmh-config-snvmj\" (UID: \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\") " pod="openstack/ovn-controller-7zjmh-config-snvmj"
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.908943 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-scripts\") pod \"ovn-controller-7zjmh-config-snvmj\" (UID: \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\") " pod="openstack/ovn-controller-7zjmh-config-snvmj"
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.908993 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-var-run\") pod \"ovn-controller-7zjmh-config-snvmj\" (UID: \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\") " pod="openstack/ovn-controller-7zjmh-config-snvmj"
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.909011 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-additional-scripts\") pod \"ovn-controller-7zjmh-config-snvmj\" (UID: \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\") " pod="openstack/ovn-controller-7zjmh-config-snvmj"
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.909085 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-var-log-ovn\") pod \"ovn-controller-7zjmh-config-snvmj\" (UID: \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\") " pod="openstack/ovn-controller-7zjmh-config-snvmj"
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.909104 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-var-run-ovn\") pod \"ovn-controller-7zjmh-config-snvmj\" (UID: \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\") " pod="openstack/ovn-controller-7zjmh-config-snvmj"
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.909224 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-var-run\") pod \"ovn-controller-7zjmh-config-snvmj\" (UID: \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\") " pod="openstack/ovn-controller-7zjmh-config-snvmj"
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.909731 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-additional-scripts\") pod \"ovn-controller-7zjmh-config-snvmj\" (UID: \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\") " pod="openstack/ovn-controller-7zjmh-config-snvmj"
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.910947 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-scripts\") pod \"ovn-controller-7zjmh-config-snvmj\" (UID: \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\") " pod="openstack/ovn-controller-7zjmh-config-snvmj"
Feb 24 10:18:32 crc kubenswrapper[4755]: I0224 10:18:32.933646 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlg9t\" (UniqueName: \"kubernetes.io/projected/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-kube-api-access-hlg9t\") pod \"ovn-controller-7zjmh-config-snvmj\" (UID: \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\") " pod="openstack/ovn-controller-7zjmh-config-snvmj"
Feb 24 10:18:33 crc kubenswrapper[4755]: I0224 10:18:33.020341 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7zjmh-config-snvmj"
Feb 24 10:18:33 crc kubenswrapper[4755]: I0224 10:18:33.241562 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc" event={"ID":"18daa7df-3f43-42f1-aa1e-264e90ef1d65","Type":"ContainerStarted","Data":"92294fb9610dfb31d5d83bec72d1a63abce67bd0ea0b0be55ec2bfe55ed39d2c"}
Feb 24 10:18:33 crc kubenswrapper[4755]: I0224 10:18:33.241622 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc"
Feb 24 10:18:33 crc kubenswrapper[4755]: I0224 10:18:33.271249 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc" podStartSLOduration=3.271210209 podStartE2EDuration="3.271210209s" podCreationTimestamp="2026-02-24 10:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 10:18:33.264299125 +0000 UTC m=+1417.720821678" watchObservedRunningTime="2026-02-24 10:18:33.271210209 +0000 UTC m=+1417.727732762"
Feb 24 10:18:33 crc kubenswrapper[4755]: I0224 10:18:33.517232 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7zjmh-config-snvmj"]
Feb 24 10:18:33 crc kubenswrapper[4755]: W0224 10:18:33.538043 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf57dba48_36c8_4aa9_a0ad_4c040d0d29af.slice/crio-b675a17dab0d1946badd0e7b42119112be8ee6f3376283cfc4bc14b1117b6f2c WatchSource:0}: Error finding container b675a17dab0d1946badd0e7b42119112be8ee6f3376283cfc4bc14b1117b6f2c: Status 404 returned error can't find the container with id b675a17dab0d1946badd0e7b42119112be8ee6f3376283cfc4bc14b1117b6f2c
Feb 24 10:18:33 crc kubenswrapper[4755]: I0224 10:18:33.765670 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51330: no serving certificate available for the kubelet"
Feb 24 10:18:34 crc kubenswrapper[4755]: I0224 10:18:34.250843 4755 generic.go:334] "Generic (PLEG): container finished" podID="f57dba48-36c8-4aa9-a0ad-4c040d0d29af" containerID="bc5f236581634855b61d26bd8193024771afa0dbab6aab927b373eeb61ec50b8" exitCode=0
Feb 24 10:18:34 crc kubenswrapper[4755]: I0224 10:18:34.250963 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7zjmh-config-snvmj" event={"ID":"f57dba48-36c8-4aa9-a0ad-4c040d0d29af","Type":"ContainerDied","Data":"bc5f236581634855b61d26bd8193024771afa0dbab6aab927b373eeb61ec50b8"}
Feb 24 10:18:34 crc kubenswrapper[4755]: I0224 10:18:34.251217 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7zjmh-config-snvmj" event={"ID":"f57dba48-36c8-4aa9-a0ad-4c040d0d29af","Type":"ContainerStarted","Data":"b675a17dab0d1946badd0e7b42119112be8ee6f3376283cfc4bc14b1117b6f2c"}
Feb 24 10:18:35 crc kubenswrapper[4755]: I0224 10:18:35.612298 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7zjmh-config-snvmj"
Feb 24 10:18:35 crc kubenswrapper[4755]: I0224 10:18:35.688418 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51332: no serving certificate available for the kubelet"
Feb 24 10:18:35 crc kubenswrapper[4755]: I0224 10:18:35.698361 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-var-run\") pod \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\" (UID: \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\") "
Feb 24 10:18:35 crc kubenswrapper[4755]: I0224 10:18:35.698396 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-scripts\") pod \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\" (UID: \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\") "
Feb 24 10:18:35 crc kubenswrapper[4755]: I0224 10:18:35.698482 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlg9t\" (UniqueName: \"kubernetes.io/projected/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-kube-api-access-hlg9t\") pod \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\" (UID: \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\") "
Feb 24 10:18:35 crc kubenswrapper[4755]: I0224 10:18:35.698501 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-var-log-ovn\") pod \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\" (UID: \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\") "
Feb 24 10:18:35 crc kubenswrapper[4755]: I0224 10:18:35.698491 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-var-run" (OuterVolumeSpecName: "var-run") pod "f57dba48-36c8-4aa9-a0ad-4c040d0d29af" (UID: "f57dba48-36c8-4aa9-a0ad-4c040d0d29af"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 10:18:35 crc kubenswrapper[4755]: I0224 10:18:35.698524 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-additional-scripts\") pod \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\" (UID: \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\") "
Feb 24 10:18:35 crc kubenswrapper[4755]: I0224 10:18:35.698576 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-var-run-ovn\") pod \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\" (UID: \"f57dba48-36c8-4aa9-a0ad-4c040d0d29af\") "
Feb 24 10:18:35 crc kubenswrapper[4755]: I0224 10:18:35.698649 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f57dba48-36c8-4aa9-a0ad-4c040d0d29af" (UID: "f57dba48-36c8-4aa9-a0ad-4c040d0d29af"). InnerVolumeSpecName "var-log-ovn".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 10:18:35 crc kubenswrapper[4755]: I0224 10:18:35.698849 4755 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-var-run\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:35 crc kubenswrapper[4755]: I0224 10:18:35.698861 4755 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:35 crc kubenswrapper[4755]: I0224 10:18:35.698896 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f57dba48-36c8-4aa9-a0ad-4c040d0d29af" (UID: "f57dba48-36c8-4aa9-a0ad-4c040d0d29af"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 10:18:35 crc kubenswrapper[4755]: I0224 10:18:35.699355 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f57dba48-36c8-4aa9-a0ad-4c040d0d29af" (UID: "f57dba48-36c8-4aa9-a0ad-4c040d0d29af"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:18:35 crc kubenswrapper[4755]: I0224 10:18:35.700438 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-scripts" (OuterVolumeSpecName: "scripts") pod "f57dba48-36c8-4aa9-a0ad-4c040d0d29af" (UID: "f57dba48-36c8-4aa9-a0ad-4c040d0d29af"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:18:35 crc kubenswrapper[4755]: I0224 10:18:35.709465 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-kube-api-access-hlg9t" (OuterVolumeSpecName: "kube-api-access-hlg9t") pod "f57dba48-36c8-4aa9-a0ad-4c040d0d29af" (UID: "f57dba48-36c8-4aa9-a0ad-4c040d0d29af"). InnerVolumeSpecName "kube-api-access-hlg9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:18:35 crc kubenswrapper[4755]: I0224 10:18:35.800737 4755 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:35 crc kubenswrapper[4755]: I0224 10:18:35.800775 4755 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:35 crc kubenswrapper[4755]: I0224 10:18:35.800787 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:35 crc kubenswrapper[4755]: I0224 10:18:35.800800 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlg9t\" (UniqueName: \"kubernetes.io/projected/f57dba48-36c8-4aa9-a0ad-4c040d0d29af-kube-api-access-hlg9t\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:36 crc kubenswrapper[4755]: I0224 10:18:36.273631 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7zjmh-config-snvmj" event={"ID":"f57dba48-36c8-4aa9-a0ad-4c040d0d29af","Type":"ContainerDied","Data":"b675a17dab0d1946badd0e7b42119112be8ee6f3376283cfc4bc14b1117b6f2c"} Feb 24 10:18:36 crc kubenswrapper[4755]: I0224 10:18:36.274009 4755 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b675a17dab0d1946badd0e7b42119112be8ee6f3376283cfc4bc14b1117b6f2c" Feb 24 10:18:36 crc kubenswrapper[4755]: I0224 10:18:36.273714 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7zjmh-config-snvmj" Feb 24 10:18:36 crc kubenswrapper[4755]: I0224 10:18:36.752530 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-7zjmh-config-snvmj"] Feb 24 10:18:36 crc kubenswrapper[4755]: I0224 10:18:36.764505 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-7zjmh-config-snvmj"] Feb 24 10:18:36 crc kubenswrapper[4755]: I0224 10:18:36.815875 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51338: no serving certificate available for the kubelet" Feb 24 10:18:37 crc kubenswrapper[4755]: I0224 10:18:37.401528 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-7zjmh" Feb 24 10:18:37 crc kubenswrapper[4755]: I0224 10:18:37.622410 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 24 10:18:37 crc kubenswrapper[4755]: I0224 10:18:37.818324 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51352: no serving certificate available for the kubelet" Feb 24 10:18:37 crc kubenswrapper[4755]: I0224 10:18:37.867091 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51368: no serving certificate available for the kubelet" Feb 24 10:18:37 crc kubenswrapper[4755]: I0224 10:18:37.907236 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51384: no serving certificate available for the kubelet" Feb 24 10:18:37 crc kubenswrapper[4755]: I0224 10:18:37.922264 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51388: no serving certificate available for the kubelet" Feb 24 10:18:37 crc kubenswrapper[4755]: I0224 10:18:37.999207 4755 ???:1] 
"http: TLS handshake error from 192.168.126.11:51404: no serving certificate available for the kubelet" Feb 24 10:18:38 crc kubenswrapper[4755]: I0224 10:18:38.013087 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51412: no serving certificate available for the kubelet" Feb 24 10:18:38 crc kubenswrapper[4755]: I0224 10:18:38.065336 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51424: no serving certificate available for the kubelet" Feb 24 10:18:38 crc kubenswrapper[4755]: I0224 10:18:38.129358 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51438: no serving certificate available for the kubelet" Feb 24 10:18:38 crc kubenswrapper[4755]: I0224 10:18:38.244337 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51446: no serving certificate available for the kubelet" Feb 24 10:18:38 crc kubenswrapper[4755]: I0224 10:18:38.327864 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f57dba48-36c8-4aa9-a0ad-4c040d0d29af" path="/var/lib/kubelet/pods/f57dba48-36c8-4aa9-a0ad-4c040d0d29af/volumes" Feb 24 10:18:38 crc kubenswrapper[4755]: I0224 10:18:38.336437 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51452: no serving certificate available for the kubelet" Feb 24 10:18:38 crc kubenswrapper[4755]: I0224 10:18:38.445129 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 24 10:18:38 crc kubenswrapper[4755]: I0224 10:18:38.612239 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51454: no serving certificate available for the kubelet" Feb 24 10:18:38 crc kubenswrapper[4755]: I0224 10:18:38.722401 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51470: no serving certificate available for the kubelet" Feb 24 10:18:38 crc kubenswrapper[4755]: I0224 10:18:38.734708 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51476: no serving certificate available for the kubelet" Feb 24 10:18:38 crc 
kubenswrapper[4755]: I0224 10:18:38.910690 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51488: no serving certificate available for the kubelet" Feb 24 10:18:39 crc kubenswrapper[4755]: I0224 10:18:39.021775 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51494: no serving certificate available for the kubelet" Feb 24 10:18:39 crc kubenswrapper[4755]: I0224 10:18:39.123022 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51498: no serving certificate available for the kubelet" Feb 24 10:18:39 crc kubenswrapper[4755]: I0224 10:18:39.289803 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51500: no serving certificate available for the kubelet" Feb 24 10:18:39 crc kubenswrapper[4755]: I0224 10:18:39.473681 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51514: no serving certificate available for the kubelet" Feb 24 10:18:39 crc kubenswrapper[4755]: I0224 10:18:39.690316 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51516: no serving certificate available for the kubelet" Feb 24 10:18:39 crc kubenswrapper[4755]: I0224 10:18:39.754032 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51518: no serving certificate available for the kubelet" Feb 24 10:18:39 crc kubenswrapper[4755]: I0224 10:18:39.852909 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51532: no serving certificate available for the kubelet" Feb 24 10:18:40 crc kubenswrapper[4755]: I0224 10:18:40.185808 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51534: no serving certificate available for the kubelet" Feb 24 10:18:40 crc kubenswrapper[4755]: I0224 10:18:40.927443 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51536: no serving certificate available for the kubelet" Feb 24 10:18:40 crc kubenswrapper[4755]: I0224 10:18:40.953376 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785cfccf4f-x8pjc" Feb 24 10:18:41 crc kubenswrapper[4755]: I0224 
10:18:41.074740 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77cf9b784c-lmqn8"] Feb 24 10:18:41 crc kubenswrapper[4755]: I0224 10:18:41.075043 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" podUID="667faef0-2cfe-417b-9872-a36c104256aa" containerName="dnsmasq-dns" containerID="cri-o://791d67ff7ebf4db0e3fb4bb159b6befa3ec406d845e00697fec5f290388984df" gracePeriod=10 Feb 24 10:18:41 crc kubenswrapper[4755]: I0224 10:18:41.096352 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51552: no serving certificate available for the kubelet" Feb 24 10:18:41 crc kubenswrapper[4755]: I0224 10:18:41.310713 4755 generic.go:334] "Generic (PLEG): container finished" podID="667faef0-2cfe-417b-9872-a36c104256aa" containerID="791d67ff7ebf4db0e3fb4bb159b6befa3ec406d845e00697fec5f290388984df" exitCode=0 Feb 24 10:18:41 crc kubenswrapper[4755]: I0224 10:18:41.310763 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" event={"ID":"667faef0-2cfe-417b-9872-a36c104256aa","Type":"ContainerDied","Data":"791d67ff7ebf4db0e3fb4bb159b6befa3ec406d845e00697fec5f290388984df"} Feb 24 10:18:41 crc kubenswrapper[4755]: I0224 10:18:41.549605 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" Feb 24 10:18:41 crc kubenswrapper[4755]: I0224 10:18:41.598305 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/667faef0-2cfe-417b-9872-a36c104256aa-dns-svc\") pod \"667faef0-2cfe-417b-9872-a36c104256aa\" (UID: \"667faef0-2cfe-417b-9872-a36c104256aa\") " Feb 24 10:18:41 crc kubenswrapper[4755]: I0224 10:18:41.598459 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/667faef0-2cfe-417b-9872-a36c104256aa-ovsdbserver-sb\") pod \"667faef0-2cfe-417b-9872-a36c104256aa\" (UID: \"667faef0-2cfe-417b-9872-a36c104256aa\") " Feb 24 10:18:41 crc kubenswrapper[4755]: I0224 10:18:41.598503 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/667faef0-2cfe-417b-9872-a36c104256aa-ovsdbserver-nb\") pod \"667faef0-2cfe-417b-9872-a36c104256aa\" (UID: \"667faef0-2cfe-417b-9872-a36c104256aa\") " Feb 24 10:18:41 crc kubenswrapper[4755]: I0224 10:18:41.598578 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/667faef0-2cfe-417b-9872-a36c104256aa-config\") pod \"667faef0-2cfe-417b-9872-a36c104256aa\" (UID: \"667faef0-2cfe-417b-9872-a36c104256aa\") " Feb 24 10:18:41 crc kubenswrapper[4755]: I0224 10:18:41.599507 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nfbh\" (UniqueName: \"kubernetes.io/projected/667faef0-2cfe-417b-9872-a36c104256aa-kube-api-access-8nfbh\") pod \"667faef0-2cfe-417b-9872-a36c104256aa\" (UID: \"667faef0-2cfe-417b-9872-a36c104256aa\") " Feb 24 10:18:41 crc kubenswrapper[4755]: I0224 10:18:41.608350 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/667faef0-2cfe-417b-9872-a36c104256aa-kube-api-access-8nfbh" (OuterVolumeSpecName: "kube-api-access-8nfbh") pod "667faef0-2cfe-417b-9872-a36c104256aa" (UID: "667faef0-2cfe-417b-9872-a36c104256aa"). InnerVolumeSpecName "kube-api-access-8nfbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:18:41 crc kubenswrapper[4755]: I0224 10:18:41.648301 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/667faef0-2cfe-417b-9872-a36c104256aa-config" (OuterVolumeSpecName: "config") pod "667faef0-2cfe-417b-9872-a36c104256aa" (UID: "667faef0-2cfe-417b-9872-a36c104256aa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:18:41 crc kubenswrapper[4755]: I0224 10:18:41.656345 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/667faef0-2cfe-417b-9872-a36c104256aa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "667faef0-2cfe-417b-9872-a36c104256aa" (UID: "667faef0-2cfe-417b-9872-a36c104256aa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:18:41 crc kubenswrapper[4755]: I0224 10:18:41.670817 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/667faef0-2cfe-417b-9872-a36c104256aa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "667faef0-2cfe-417b-9872-a36c104256aa" (UID: "667faef0-2cfe-417b-9872-a36c104256aa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:18:41 crc kubenswrapper[4755]: I0224 10:18:41.673614 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/667faef0-2cfe-417b-9872-a36c104256aa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "667faef0-2cfe-417b-9872-a36c104256aa" (UID: "667faef0-2cfe-417b-9872-a36c104256aa"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:18:41 crc kubenswrapper[4755]: I0224 10:18:41.705217 4755 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/667faef0-2cfe-417b-9872-a36c104256aa-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:41 crc kubenswrapper[4755]: I0224 10:18:41.705508 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/667faef0-2cfe-417b-9872-a36c104256aa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:41 crc kubenswrapper[4755]: I0224 10:18:41.705638 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/667faef0-2cfe-417b-9872-a36c104256aa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:41 crc kubenswrapper[4755]: I0224 10:18:41.705767 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/667faef0-2cfe-417b-9872-a36c104256aa-config\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:41 crc kubenswrapper[4755]: I0224 10:18:41.705894 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nfbh\" (UniqueName: \"kubernetes.io/projected/667faef0-2cfe-417b-9872-a36c104256aa-kube-api-access-8nfbh\") on node \"crc\" DevicePath \"\"" Feb 24 10:18:41 crc kubenswrapper[4755]: I0224 10:18:41.768911 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51564: no serving certificate available for the kubelet" Feb 24 10:18:42 crc kubenswrapper[4755]: I0224 10:18:42.296553 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51570: no serving certificate available for the kubelet" Feb 24 10:18:42 crc kubenswrapper[4755]: I0224 10:18:42.325259 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" Feb 24 10:18:42 crc kubenswrapper[4755]: I0224 10:18:42.331961 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cf9b784c-lmqn8" event={"ID":"667faef0-2cfe-417b-9872-a36c104256aa","Type":"ContainerDied","Data":"d0b8262c5dc8a8fc34593393d66d46f03ecd824c0fefb5f38ab9ffe05dbafbcd"} Feb 24 10:18:42 crc kubenswrapper[4755]: I0224 10:18:42.332015 4755 scope.go:117] "RemoveContainer" containerID="791d67ff7ebf4db0e3fb4bb159b6befa3ec406d845e00697fec5f290388984df" Feb 24 10:18:42 crc kubenswrapper[4755]: I0224 10:18:42.389732 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77cf9b784c-lmqn8"] Feb 24 10:18:42 crc kubenswrapper[4755]: I0224 10:18:42.395213 4755 scope.go:117] "RemoveContainer" containerID="e437b39b5fa6d8ce8c5be0ef0c8d2711a2bb8c936a2d13435eaf173227ccb0b9" Feb 24 10:18:42 crc kubenswrapper[4755]: I0224 10:18:42.402195 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77cf9b784c-lmqn8"] Feb 24 10:18:42 crc kubenswrapper[4755]: I0224 10:18:42.895125 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51582: no serving certificate available for the kubelet" Feb 24 10:18:43 crc kubenswrapper[4755]: I0224 10:18:43.795214 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53472: no serving certificate available for the kubelet" Feb 24 10:18:44 crc kubenswrapper[4755]: I0224 10:18:44.328229 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="667faef0-2cfe-417b-9872-a36c104256aa" path="/var/lib/kubelet/pods/667faef0-2cfe-417b-9872-a36c104256aa/volumes" Feb 24 10:18:44 crc kubenswrapper[4755]: I0224 10:18:44.823626 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53476: no serving certificate available for the kubelet" Feb 24 10:18:45 crc kubenswrapper[4755]: I0224 10:18:45.017314 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53478: no serving 
certificate available for the kubelet" Feb 24 10:18:45 crc kubenswrapper[4755]: I0224 10:18:45.952820 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53484: no serving certificate available for the kubelet" Feb 24 10:18:47 crc kubenswrapper[4755]: I0224 10:18:47.876563 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53488: no serving certificate available for the kubelet" Feb 24 10:18:48 crc kubenswrapper[4755]: I0224 10:18:48.991364 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53492: no serving certificate available for the kubelet" Feb 24 10:18:48 crc kubenswrapper[4755]: I0224 10:18:48.994635 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53508: no serving certificate available for the kubelet" Feb 24 10:18:50 crc kubenswrapper[4755]: I0224 10:18:50.294871 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53522: no serving certificate available for the kubelet" Feb 24 10:18:50 crc kubenswrapper[4755]: I0224 10:18:50.940102 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53526: no serving certificate available for the kubelet" Feb 24 10:18:51 crc kubenswrapper[4755]: I0224 10:18:51.694956 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:18:51 crc kubenswrapper[4755]: I0224 10:18:51.695036 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 10:18:52 crc kubenswrapper[4755]: I0224 10:18:52.043599 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53528: no serving 
certificate available for the kubelet" Feb 24 10:18:53 crc kubenswrapper[4755]: I0224 10:18:53.979460 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54836: no serving certificate available for the kubelet" Feb 24 10:18:55 crc kubenswrapper[4755]: I0224 10:18:55.101963 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54852: no serving certificate available for the kubelet" Feb 24 10:18:57 crc kubenswrapper[4755]: I0224 10:18:57.022870 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54864: no serving certificate available for the kubelet" Feb 24 10:18:58 crc kubenswrapper[4755]: I0224 10:18:58.152086 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54878: no serving certificate available for the kubelet" Feb 24 10:18:59 crc kubenswrapper[4755]: I0224 10:18:59.380129 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54888: no serving certificate available for the kubelet" Feb 24 10:19:00 crc kubenswrapper[4755]: I0224 10:19:00.075252 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54902: no serving certificate available for the kubelet" Feb 24 10:19:00 crc kubenswrapper[4755]: I0224 10:19:00.696637 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54908: no serving certificate available for the kubelet" Feb 24 10:19:01 crc kubenswrapper[4755]: I0224 10:19:01.215185 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54922: no serving certificate available for the kubelet" Feb 24 10:19:03 crc kubenswrapper[4755]: I0224 10:19:03.134006 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54928: no serving certificate available for the kubelet" Feb 24 10:19:04 crc kubenswrapper[4755]: I0224 10:19:04.270391 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41182: no serving certificate available for the kubelet" Feb 24 10:19:06 crc kubenswrapper[4755]: I0224 10:19:06.235131 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41196: no serving certificate available for the 
kubelet" Feb 24 10:19:07 crc kubenswrapper[4755]: I0224 10:19:07.333503 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41210: no serving certificate available for the kubelet" Feb 24 10:19:09 crc kubenswrapper[4755]: I0224 10:19:09.288354 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41226: no serving certificate available for the kubelet" Feb 24 10:19:10 crc kubenswrapper[4755]: I0224 10:19:10.395463 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41234: no serving certificate available for the kubelet" Feb 24 10:19:12 crc kubenswrapper[4755]: I0224 10:19:12.333760 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41238: no serving certificate available for the kubelet" Feb 24 10:19:13 crc kubenswrapper[4755]: I0224 10:19:13.428885 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41250: no serving certificate available for the kubelet" Feb 24 10:19:15 crc kubenswrapper[4755]: I0224 10:19:15.393038 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52694: no serving certificate available for the kubelet" Feb 24 10:19:16 crc kubenswrapper[4755]: I0224 10:19:16.483662 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52706: no serving certificate available for the kubelet" Feb 24 10:19:18 crc kubenswrapper[4755]: I0224 10:19:18.449283 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52722: no serving certificate available for the kubelet" Feb 24 10:19:19 crc kubenswrapper[4755]: I0224 10:19:19.535898 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52732: no serving certificate available for the kubelet" Feb 24 10:19:19 crc kubenswrapper[4755]: I0224 10:19:19.959915 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52738: no serving certificate available for the kubelet" Feb 24 10:19:21 crc kubenswrapper[4755]: I0224 10:19:21.411482 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52748: no serving certificate available for the kubelet" Feb 24 10:19:21 crc 
kubenswrapper[4755]: I0224 10:19:21.501505 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52754: no serving certificate available for the kubelet" Feb 24 10:19:21 crc kubenswrapper[4755]: I0224 10:19:21.694797 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:19:21 crc kubenswrapper[4755]: I0224 10:19:21.694860 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 10:19:21 crc kubenswrapper[4755]: I0224 10:19:21.694904 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" Feb 24 10:19:21 crc kubenswrapper[4755]: I0224 10:19:21.695584 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3bcc552811b027de294708fb8fbb284b52f08a7647c99aefef0df363ad7db3d3"} pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 10:19:21 crc kubenswrapper[4755]: I0224 10:19:21.695645 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" containerID="cri-o://3bcc552811b027de294708fb8fbb284b52f08a7647c99aefef0df363ad7db3d3" gracePeriod=600 Feb 24 10:19:22 crc kubenswrapper[4755]: I0224 
10:19:22.590048 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52756: no serving certificate available for the kubelet" Feb 24 10:19:22 crc kubenswrapper[4755]: I0224 10:19:22.753874 4755 generic.go:334] "Generic (PLEG): container finished" podID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerID="3bcc552811b027de294708fb8fbb284b52f08a7647c99aefef0df363ad7db3d3" exitCode=0 Feb 24 10:19:22 crc kubenswrapper[4755]: I0224 10:19:22.753938 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerDied","Data":"3bcc552811b027de294708fb8fbb284b52f08a7647c99aefef0df363ad7db3d3"} Feb 24 10:19:22 crc kubenswrapper[4755]: I0224 10:19:22.754310 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerStarted","Data":"70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733"} Feb 24 10:19:22 crc kubenswrapper[4755]: I0224 10:19:22.754357 4755 scope.go:117] "RemoveContainer" containerID="805e6e8826e15b1db15a276c8f3343a64e680fde18436416c1dae4ce97e5fa1f" Feb 24 10:19:24 crc kubenswrapper[4755]: I0224 10:19:24.551630 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57358: no serving certificate available for the kubelet" Feb 24 10:19:25 crc kubenswrapper[4755]: I0224 10:19:25.644685 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57360: no serving certificate available for the kubelet" Feb 24 10:19:27 crc kubenswrapper[4755]: I0224 10:19:27.611767 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57372: no serving certificate available for the kubelet" Feb 24 10:19:28 crc kubenswrapper[4755]: I0224 10:19:28.707537 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57386: no serving certificate available for the kubelet" Feb 24 10:19:30 crc kubenswrapper[4755]: 
I0224 10:19:30.671968 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57402: no serving certificate available for the kubelet" Feb 24 10:19:31 crc kubenswrapper[4755]: I0224 10:19:31.813599 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57404: no serving certificate available for the kubelet" Feb 24 10:19:33 crc kubenswrapper[4755]: I0224 10:19:33.711743 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50824: no serving certificate available for the kubelet" Feb 24 10:19:34 crc kubenswrapper[4755]: I0224 10:19:34.880369 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50838: no serving certificate available for the kubelet" Feb 24 10:19:36 crc kubenswrapper[4755]: I0224 10:19:36.751410 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50852: no serving certificate available for the kubelet" Feb 24 10:19:37 crc kubenswrapper[4755]: I0224 10:19:37.930661 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50866: no serving certificate available for the kubelet" Feb 24 10:19:39 crc kubenswrapper[4755]: I0224 10:19:39.792534 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50876: no serving certificate available for the kubelet" Feb 24 10:19:40 crc kubenswrapper[4755]: I0224 10:19:40.988702 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50890: no serving certificate available for the kubelet" Feb 24 10:19:42 crc kubenswrapper[4755]: I0224 10:19:42.850983 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50900: no serving certificate available for the kubelet" Feb 24 10:19:44 crc kubenswrapper[4755]: I0224 10:19:44.031244 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33184: no serving certificate available for the kubelet" Feb 24 10:19:45 crc kubenswrapper[4755]: I0224 10:19:45.906358 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33186: no serving certificate available for the kubelet" Feb 24 10:19:47 crc kubenswrapper[4755]: I0224 10:19:47.090035 4755 
???:1] "http: TLS handshake error from 192.168.126.11:33192: no serving certificate available for the kubelet" Feb 24 10:19:48 crc kubenswrapper[4755]: I0224 10:19:48.950696 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33194: no serving certificate available for the kubelet" Feb 24 10:19:50 crc kubenswrapper[4755]: I0224 10:19:50.143352 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33200: no serving certificate available for the kubelet" Feb 24 10:19:52 crc kubenswrapper[4755]: I0224 10:19:52.009809 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33214: no serving certificate available for the kubelet" Feb 24 10:19:53 crc kubenswrapper[4755]: I0224 10:19:53.193419 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33230: no serving certificate available for the kubelet" Feb 24 10:19:55 crc kubenswrapper[4755]: I0224 10:19:55.068005 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48586: no serving certificate available for the kubelet" Feb 24 10:19:56 crc kubenswrapper[4755]: I0224 10:19:56.249327 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48588: no serving certificate available for the kubelet" Feb 24 10:19:58 crc kubenswrapper[4755]: I0224 10:19:58.115553 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48602: no serving certificate available for the kubelet" Feb 24 10:19:59 crc kubenswrapper[4755]: I0224 10:19:59.311822 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48614: no serving certificate available for the kubelet" Feb 24 10:20:01 crc kubenswrapper[4755]: I0224 10:20:01.023349 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48616: no serving certificate available for the kubelet" Feb 24 10:20:01 crc kubenswrapper[4755]: I0224 10:20:01.161146 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48624: no serving certificate available for the kubelet" Feb 24 10:20:02 crc kubenswrapper[4755]: I0224 10:20:02.377822 4755 ???:1] "http: TLS handshake 
error from 192.168.126.11:48636: no serving certificate available for the kubelet" Feb 24 10:20:02 crc kubenswrapper[4755]: I0224 10:20:02.513303 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48638: no serving certificate available for the kubelet" Feb 24 10:20:04 crc kubenswrapper[4755]: I0224 10:20:04.207475 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35046: no serving certificate available for the kubelet" Feb 24 10:20:05 crc kubenswrapper[4755]: I0224 10:20:05.436759 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35048: no serving certificate available for the kubelet" Feb 24 10:20:07 crc kubenswrapper[4755]: I0224 10:20:07.246208 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35054: no serving certificate available for the kubelet" Feb 24 10:20:08 crc kubenswrapper[4755]: I0224 10:20:08.499278 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35060: no serving certificate available for the kubelet" Feb 24 10:20:10 crc kubenswrapper[4755]: I0224 10:20:10.285402 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35072: no serving certificate available for the kubelet" Feb 24 10:20:11 crc kubenswrapper[4755]: I0224 10:20:11.549736 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35074: no serving certificate available for the kubelet" Feb 24 10:20:13 crc kubenswrapper[4755]: I0224 10:20:13.389368 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35084: no serving certificate available for the kubelet" Feb 24 10:20:14 crc kubenswrapper[4755]: I0224 10:20:14.616642 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57202: no serving certificate available for the kubelet" Feb 24 10:20:16 crc kubenswrapper[4755]: I0224 10:20:16.445483 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57206: no serving certificate available for the kubelet" Feb 24 10:20:17 crc kubenswrapper[4755]: I0224 10:20:17.653799 4755 ???:1] "http: TLS handshake error from 
192.168.126.11:57222: no serving certificate available for the kubelet" Feb 24 10:20:19 crc kubenswrapper[4755]: I0224 10:20:19.498434 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57238: no serving certificate available for the kubelet" Feb 24 10:20:20 crc kubenswrapper[4755]: I0224 10:20:20.708686 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57248: no serving certificate available for the kubelet" Feb 24 10:20:22 crc kubenswrapper[4755]: I0224 10:20:22.551694 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57250: no serving certificate available for the kubelet" Feb 24 10:20:23 crc kubenswrapper[4755]: I0224 10:20:23.765907 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60726: no serving certificate available for the kubelet" Feb 24 10:20:25 crc kubenswrapper[4755]: I0224 10:20:25.617803 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60742: no serving certificate available for the kubelet" Feb 24 10:20:26 crc kubenswrapper[4755]: I0224 10:20:26.827485 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60758: no serving certificate available for the kubelet" Feb 24 10:20:28 crc kubenswrapper[4755]: I0224 10:20:28.663326 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60764: no serving certificate available for the kubelet" Feb 24 10:20:29 crc kubenswrapper[4755]: I0224 10:20:29.886764 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60766: no serving certificate available for the kubelet" Feb 24 10:20:31 crc kubenswrapper[4755]: I0224 10:20:31.720828 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60780: no serving certificate available for the kubelet" Feb 24 10:20:32 crc kubenswrapper[4755]: I0224 10:20:32.942983 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60784: no serving certificate available for the kubelet" Feb 24 10:20:35 crc kubenswrapper[4755]: I0224 10:20:35.025123 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46568: no 
serving certificate available for the kubelet" Feb 24 10:20:36 crc kubenswrapper[4755]: I0224 10:20:36.001602 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46580: no serving certificate available for the kubelet" Feb 24 10:20:38 crc kubenswrapper[4755]: I0224 10:20:38.080460 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46596: no serving certificate available for the kubelet" Feb 24 10:20:39 crc kubenswrapper[4755]: I0224 10:20:39.058320 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46602: no serving certificate available for the kubelet" Feb 24 10:20:41 crc kubenswrapper[4755]: I0224 10:20:41.138764 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46612: no serving certificate available for the kubelet" Feb 24 10:20:42 crc kubenswrapper[4755]: I0224 10:20:42.106457 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46620: no serving certificate available for the kubelet" Feb 24 10:20:44 crc kubenswrapper[4755]: I0224 10:20:44.187090 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35320: no serving certificate available for the kubelet" Feb 24 10:20:45 crc kubenswrapper[4755]: I0224 10:20:45.168337 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35328: no serving certificate available for the kubelet" Feb 24 10:20:47 crc kubenswrapper[4755]: I0224 10:20:47.233051 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35342: no serving certificate available for the kubelet" Feb 24 10:20:48 crc kubenswrapper[4755]: I0224 10:20:48.214898 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35358: no serving certificate available for the kubelet" Feb 24 10:20:50 crc kubenswrapper[4755]: I0224 10:20:50.292564 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35374: no serving certificate available for the kubelet" Feb 24 10:20:51 crc kubenswrapper[4755]: I0224 10:20:51.277219 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35384: no serving certificate available 
for the kubelet" Feb 24 10:20:53 crc kubenswrapper[4755]: I0224 10:20:53.346647 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35388: no serving certificate available for the kubelet" Feb 24 10:20:54 crc kubenswrapper[4755]: I0224 10:20:54.340848 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42076: no serving certificate available for the kubelet" Feb 24 10:20:56 crc kubenswrapper[4755]: I0224 10:20:56.437580 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42080: no serving certificate available for the kubelet" Feb 24 10:20:57 crc kubenswrapper[4755]: I0224 10:20:57.381494 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42096: no serving certificate available for the kubelet" Feb 24 10:20:59 crc kubenswrapper[4755]: I0224 10:20:59.475538 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42104: no serving certificate available for the kubelet" Feb 24 10:21:00 crc kubenswrapper[4755]: I0224 10:21:00.413123 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42106: no serving certificate available for the kubelet" Feb 24 10:21:02 crc kubenswrapper[4755]: I0224 10:21:02.513307 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42122: no serving certificate available for the kubelet" Feb 24 10:21:03 crc kubenswrapper[4755]: I0224 10:21:03.464214 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42138: no serving certificate available for the kubelet" Feb 24 10:21:05 crc kubenswrapper[4755]: I0224 10:21:05.570897 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42026: no serving certificate available for the kubelet" Feb 24 10:21:06 crc kubenswrapper[4755]: I0224 10:21:06.505658 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42038: no serving certificate available for the kubelet" Feb 24 10:21:08 crc kubenswrapper[4755]: I0224 10:21:08.634661 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42052: no serving certificate available for the kubelet" Feb 24 
10:21:09 crc kubenswrapper[4755]: I0224 10:21:09.546404 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42054: no serving certificate available for the kubelet" Feb 24 10:21:11 crc kubenswrapper[4755]: I0224 10:21:11.681004 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42056: no serving certificate available for the kubelet" Feb 24 10:21:12 crc kubenswrapper[4755]: I0224 10:21:12.581783 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42066: no serving certificate available for the kubelet" Feb 24 10:21:14 crc kubenswrapper[4755]: I0224 10:21:14.735947 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35554: no serving certificate available for the kubelet" Feb 24 10:21:15 crc kubenswrapper[4755]: I0224 10:21:15.640508 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35570: no serving certificate available for the kubelet" Feb 24 10:21:17 crc kubenswrapper[4755]: I0224 10:21:17.790418 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35580: no serving certificate available for the kubelet" Feb 24 10:21:18 crc kubenswrapper[4755]: I0224 10:21:18.696047 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35592: no serving certificate available for the kubelet" Feb 24 10:21:20 crc kubenswrapper[4755]: I0224 10:21:20.844375 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35600: no serving certificate available for the kubelet" Feb 24 10:21:21 crc kubenswrapper[4755]: I0224 10:21:21.762680 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35602: no serving certificate available for the kubelet" Feb 24 10:21:23 crc kubenswrapper[4755]: I0224 10:21:23.093046 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35618: no serving certificate available for the kubelet" Feb 24 10:21:23 crc kubenswrapper[4755]: I0224 10:21:23.893304 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58094: no serving certificate available for the kubelet" Feb 24 10:21:24 crc 
kubenswrapper[4755]: I0224 10:21:24.586536 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58106: no serving certificate available for the kubelet" Feb 24 10:21:24 crc kubenswrapper[4755]: I0224 10:21:24.836022 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58118: no serving certificate available for the kubelet" Feb 24 10:21:26 crc kubenswrapper[4755]: I0224 10:21:26.943292 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58124: no serving certificate available for the kubelet" Feb 24 10:21:27 crc kubenswrapper[4755]: I0224 10:21:27.888796 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58138: no serving certificate available for the kubelet" Feb 24 10:21:29 crc kubenswrapper[4755]: I0224 10:21:29.997291 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58148: no serving certificate available for the kubelet" Feb 24 10:21:30 crc kubenswrapper[4755]: I0224 10:21:30.940008 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58158: no serving certificate available for the kubelet" Feb 24 10:21:33 crc kubenswrapper[4755]: I0224 10:21:33.048040 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58170: no serving certificate available for the kubelet" Feb 24 10:21:33 crc kubenswrapper[4755]: I0224 10:21:33.985519 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49236: no serving certificate available for the kubelet" Feb 24 10:21:36 crc kubenswrapper[4755]: I0224 10:21:36.090323 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49244: no serving certificate available for the kubelet" Feb 24 10:21:37 crc kubenswrapper[4755]: I0224 10:21:37.027049 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49256: no serving certificate available for the kubelet" Feb 24 10:21:39 crc kubenswrapper[4755]: I0224 10:21:39.139205 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49260: no serving certificate available for the kubelet" Feb 24 10:21:40 crc kubenswrapper[4755]: I0224 
10:21:40.064422 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49266: no serving certificate available for the kubelet" Feb 24 10:21:42 crc kubenswrapper[4755]: I0224 10:21:42.193224 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49280: no serving certificate available for the kubelet" Feb 24 10:21:43 crc kubenswrapper[4755]: I0224 10:21:43.122817 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49292: no serving certificate available for the kubelet" Feb 24 10:21:45 crc kubenswrapper[4755]: I0224 10:21:45.257295 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35816: no serving certificate available for the kubelet" Feb 24 10:21:46 crc kubenswrapper[4755]: I0224 10:21:46.159116 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35832: no serving certificate available for the kubelet" Feb 24 10:21:48 crc kubenswrapper[4755]: I0224 10:21:48.316823 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35834: no serving certificate available for the kubelet" Feb 24 10:21:49 crc kubenswrapper[4755]: I0224 10:21:49.206740 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35836: no serving certificate available for the kubelet" Feb 24 10:21:51 crc kubenswrapper[4755]: I0224 10:21:51.384281 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35840: no serving certificate available for the kubelet" Feb 24 10:21:51 crc kubenswrapper[4755]: I0224 10:21:51.695464 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:21:51 crc kubenswrapper[4755]: I0224 10:21:51.695538 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 10:21:52 crc kubenswrapper[4755]: I0224 10:21:52.266196 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35850: no serving certificate available for the kubelet" Feb 24 10:21:54 crc kubenswrapper[4755]: I0224 10:21:54.447987 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34262: no serving certificate available for the kubelet" Feb 24 10:21:55 crc kubenswrapper[4755]: I0224 10:21:55.311258 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34272: no serving certificate available for the kubelet" Feb 24 10:21:57 crc kubenswrapper[4755]: I0224 10:21:57.510212 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34284: no serving certificate available for the kubelet" Feb 24 10:21:58 crc kubenswrapper[4755]: I0224 10:21:58.372005 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34294: no serving certificate available for the kubelet" Feb 24 10:22:00 crc kubenswrapper[4755]: I0224 10:22:00.570283 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34298: no serving certificate available for the kubelet" Feb 24 10:22:01 crc kubenswrapper[4755]: I0224 10:22:01.435053 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34304: no serving certificate available for the kubelet" Feb 24 10:22:03 crc kubenswrapper[4755]: I0224 10:22:03.609751 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34314: no serving certificate available for the kubelet" Feb 24 10:22:04 crc kubenswrapper[4755]: I0224 10:22:04.486867 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35004: no serving certificate available for the kubelet" Feb 24 10:22:06 crc kubenswrapper[4755]: I0224 10:22:06.655096 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35018: no serving certificate available for the kubelet" Feb 24 10:22:07 crc kubenswrapper[4755]: I0224 
10:22:07.551222 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35026: no serving certificate available for the kubelet" Feb 24 10:22:07 crc kubenswrapper[4755]: I0224 10:22:07.640614 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t2qqz"] Feb 24 10:22:07 crc kubenswrapper[4755]: E0224 10:22:07.641746 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="667faef0-2cfe-417b-9872-a36c104256aa" containerName="init" Feb 24 10:22:07 crc kubenswrapper[4755]: I0224 10:22:07.641914 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="667faef0-2cfe-417b-9872-a36c104256aa" containerName="init" Feb 24 10:22:07 crc kubenswrapper[4755]: E0224 10:22:07.642056 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="667faef0-2cfe-417b-9872-a36c104256aa" containerName="dnsmasq-dns" Feb 24 10:22:07 crc kubenswrapper[4755]: I0224 10:22:07.642211 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="667faef0-2cfe-417b-9872-a36c104256aa" containerName="dnsmasq-dns" Feb 24 10:22:07 crc kubenswrapper[4755]: E0224 10:22:07.642387 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f57dba48-36c8-4aa9-a0ad-4c040d0d29af" containerName="ovn-config" Feb 24 10:22:07 crc kubenswrapper[4755]: I0224 10:22:07.642479 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f57dba48-36c8-4aa9-a0ad-4c040d0d29af" containerName="ovn-config" Feb 24 10:22:07 crc kubenswrapper[4755]: I0224 10:22:07.642794 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f57dba48-36c8-4aa9-a0ad-4c040d0d29af" containerName="ovn-config" Feb 24 10:22:07 crc kubenswrapper[4755]: I0224 10:22:07.642899 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="667faef0-2cfe-417b-9872-a36c104256aa" containerName="dnsmasq-dns" Feb 24 10:22:07 crc kubenswrapper[4755]: I0224 10:22:07.644775 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t2qqz" Feb 24 10:22:07 crc kubenswrapper[4755]: I0224 10:22:07.664794 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t2qqz"] Feb 24 10:22:07 crc kubenswrapper[4755]: I0224 10:22:07.714283 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx2rx\" (UniqueName: \"kubernetes.io/projected/222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e-kube-api-access-lx2rx\") pod \"community-operators-t2qqz\" (UID: \"222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e\") " pod="openshift-marketplace/community-operators-t2qqz" Feb 24 10:22:07 crc kubenswrapper[4755]: I0224 10:22:07.714349 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e-utilities\") pod \"community-operators-t2qqz\" (UID: \"222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e\") " pod="openshift-marketplace/community-operators-t2qqz" Feb 24 10:22:07 crc kubenswrapper[4755]: I0224 10:22:07.714391 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e-catalog-content\") pod \"community-operators-t2qqz\" (UID: \"222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e\") " pod="openshift-marketplace/community-operators-t2qqz" Feb 24 10:22:07 crc kubenswrapper[4755]: I0224 10:22:07.815749 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2rx\" (UniqueName: \"kubernetes.io/projected/222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e-kube-api-access-lx2rx\") pod \"community-operators-t2qqz\" (UID: \"222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e\") " pod="openshift-marketplace/community-operators-t2qqz" Feb 24 10:22:07 crc kubenswrapper[4755]: I0224 10:22:07.816168 4755 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e-utilities\") pod \"community-operators-t2qqz\" (UID: \"222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e\") " pod="openshift-marketplace/community-operators-t2qqz" Feb 24 10:22:07 crc kubenswrapper[4755]: I0224 10:22:07.816206 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e-catalog-content\") pod \"community-operators-t2qqz\" (UID: \"222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e\") " pod="openshift-marketplace/community-operators-t2qqz" Feb 24 10:22:07 crc kubenswrapper[4755]: I0224 10:22:07.816634 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e-catalog-content\") pod \"community-operators-t2qqz\" (UID: \"222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e\") " pod="openshift-marketplace/community-operators-t2qqz" Feb 24 10:22:07 crc kubenswrapper[4755]: I0224 10:22:07.816707 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e-utilities\") pod \"community-operators-t2qqz\" (UID: \"222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e\") " pod="openshift-marketplace/community-operators-t2qqz" Feb 24 10:22:07 crc kubenswrapper[4755]: I0224 10:22:07.838772 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx2rx\" (UniqueName: \"kubernetes.io/projected/222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e-kube-api-access-lx2rx\") pod \"community-operators-t2qqz\" (UID: \"222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e\") " pod="openshift-marketplace/community-operators-t2qqz" Feb 24 10:22:07 crc kubenswrapper[4755]: I0224 10:22:07.964852 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t2qqz" Feb 24 10:22:08 crc kubenswrapper[4755]: I0224 10:22:08.456037 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t2qqz"] Feb 24 10:22:09 crc kubenswrapper[4755]: I0224 10:22:09.364722 4755 generic.go:334] "Generic (PLEG): container finished" podID="222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e" containerID="3efb97be4be0aebea710e4dba349ff3427ef7c88434eac42acd462f1bb96e060" exitCode=0 Feb 24 10:22:09 crc kubenswrapper[4755]: I0224 10:22:09.364763 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t2qqz" event={"ID":"222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e","Type":"ContainerDied","Data":"3efb97be4be0aebea710e4dba349ff3427ef7c88434eac42acd462f1bb96e060"} Feb 24 10:22:09 crc kubenswrapper[4755]: I0224 10:22:09.364788 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t2qqz" event={"ID":"222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e","Type":"ContainerStarted","Data":"12c247efe6fd0bc656a3d99400db51c28e28f68b4435e4108d3a48ebb6f737fd"} Feb 24 10:22:09 crc kubenswrapper[4755]: I0224 10:22:09.367658 4755 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 10:22:09 crc kubenswrapper[4755]: I0224 10:22:09.438047 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" containerName="galera" probeResult="failure" output=< Feb 24 10:22:09 crc kubenswrapper[4755]: waiting for gcomm URI Feb 24 10:22:09 crc kubenswrapper[4755]: > Feb 24 10:22:09 crc kubenswrapper[4755]: I0224 10:22:09.438176 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 10:22:09 crc kubenswrapper[4755]: I0224 10:22:09.438905 4755 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="galera" containerStatusID={"Type":"cri-o","ID":"c23b00c172415a5d4404660869d9355bcca364f8424ce1b7961a9b52aedf43de"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed startup probe, will be restarted" Feb 24 10:22:09 crc kubenswrapper[4755]: I0224 10:22:09.506607 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" containerName="galera" containerID="cri-o://c23b00c172415a5d4404660869d9355bcca364f8424ce1b7961a9b52aedf43de" gracePeriod=30 Feb 24 10:22:09 crc kubenswrapper[4755]: I0224 10:22:09.702298 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35036: no serving certificate available for the kubelet" Feb 24 10:22:10 crc kubenswrapper[4755]: I0224 10:22:10.375599 4755 generic.go:334] "Generic (PLEG): container finished" podID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" containerID="c23b00c172415a5d4404660869d9355bcca364f8424ce1b7961a9b52aedf43de" exitCode=143 Feb 24 10:22:10 crc kubenswrapper[4755]: I0224 10:22:10.375681 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe13802e-a28d-4e11-a315-c0ae66bf0e1c","Type":"ContainerDied","Data":"c23b00c172415a5d4404660869d9355bcca364f8424ce1b7961a9b52aedf43de"} Feb 24 10:22:10 crc kubenswrapper[4755]: I0224 10:22:10.376202 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe13802e-a28d-4e11-a315-c0ae66bf0e1c","Type":"ContainerStarted","Data":"bb6268125e049ec7a775938b07551bdc506873d3b7cecf970cc72c40925a0d95"} Feb 24 10:22:10 crc kubenswrapper[4755]: I0224 10:22:10.380507 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t2qqz" event={"ID":"222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e","Type":"ContainerStarted","Data":"bc55e2a2021c1654ba761d1d7504ff51aa67e87f2e43327807c08a9114eadf6d"} Feb 24 10:22:10 crc kubenswrapper[4755]: I0224 
10:22:10.596360 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35044: no serving certificate available for the kubelet"
Feb 24 10:22:11 crc kubenswrapper[4755]: I0224 10:22:11.390428 4755 generic.go:334] "Generic (PLEG): container finished" podID="222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e" containerID="bc55e2a2021c1654ba761d1d7504ff51aa67e87f2e43327807c08a9114eadf6d" exitCode=0
Feb 24 10:22:11 crc kubenswrapper[4755]: I0224 10:22:11.390467 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t2qqz" event={"ID":"222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e","Type":"ContainerDied","Data":"bc55e2a2021c1654ba761d1d7504ff51aa67e87f2e43327807c08a9114eadf6d"}
Feb 24 10:22:12 crc kubenswrapper[4755]: I0224 10:22:12.404235 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t2qqz" event={"ID":"222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e","Type":"ContainerStarted","Data":"c1ddcfc49adb3d573f78fa7697db1726acaa2f6d247aac879e24807064ea9495"}
Feb 24 10:22:12 crc kubenswrapper[4755]: I0224 10:22:12.429022 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t2qqz" podStartSLOduration=2.921918371 podStartE2EDuration="5.429004123s" podCreationTimestamp="2026-02-24 10:22:07 +0000 UTC" firstStartedPulling="2026-02-24 10:22:09.367208986 +0000 UTC m=+1633.823731569" lastFinishedPulling="2026-02-24 10:22:11.874294768 +0000 UTC m=+1636.330817321" observedRunningTime="2026-02-24 10:22:12.4240913 +0000 UTC m=+1636.880613873" watchObservedRunningTime="2026-02-24 10:22:12.429004123 +0000 UTC m=+1636.885526676"
Feb 24 10:22:12 crc kubenswrapper[4755]: I0224 10:22:12.752162 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35050: no serving certificate available for the kubelet"
Feb 24 10:22:12 crc kubenswrapper[4755]: I0224 10:22:12.763243 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39" containerName="galera" probeResult="failure" output=<
Feb 24 10:22:12 crc kubenswrapper[4755]: waiting for gcomm URI
Feb 24 10:22:12 crc kubenswrapper[4755]: >
Feb 24 10:22:12 crc kubenswrapper[4755]: I0224 10:22:12.763345 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 24 10:22:12 crc kubenswrapper[4755]: I0224 10:22:12.764127 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"2e538d3c9647af0ce3e2cda7edd5083e55d8a51afe1081319c126ae17e558431"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed startup probe, will be restarted"
Feb 24 10:22:12 crc kubenswrapper[4755]: I0224 10:22:12.836882 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39" containerName="galera" containerID="cri-o://2e538d3c9647af0ce3e2cda7edd5083e55d8a51afe1081319c126ae17e558431" gracePeriod=30
Feb 24 10:22:13 crc kubenswrapper[4755]: I0224 10:22:13.416151 4755 generic.go:334] "Generic (PLEG): container finished" podID="2f320527-691f-48e9-a243-f60bc805da39" containerID="2e538d3c9647af0ce3e2cda7edd5083e55d8a51afe1081319c126ae17e558431" exitCode=143
Feb 24 10:22:13 crc kubenswrapper[4755]: I0224 10:22:13.416262 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2f320527-691f-48e9-a243-f60bc805da39","Type":"ContainerDied","Data":"2e538d3c9647af0ce3e2cda7edd5083e55d8a51afe1081319c126ae17e558431"}
Feb 24 10:22:13 crc kubenswrapper[4755]: I0224 10:22:13.416332 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2f320527-691f-48e9-a243-f60bc805da39","Type":"ContainerStarted","Data":"d9b418c6634bf36064598a0e1cfcb65663cea0ac5c0b551774275cd0ca618921"}
Feb 24 10:22:13 crc kubenswrapper[4755]: I0224 10:22:13.644230 4755 ???:1] "http: TLS handshake error from 192.168.126.11:35058: no serving certificate available for the kubelet"
Feb 24 10:22:15 crc kubenswrapper[4755]: I0224 10:22:15.801084 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46140: no serving certificate available for the kubelet"
Feb 24 10:22:16 crc kubenswrapper[4755]: I0224 10:22:16.694489 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46156: no serving certificate available for the kubelet"
Feb 24 10:22:17 crc kubenswrapper[4755]: I0224 10:22:17.965411 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t2qqz"
Feb 24 10:22:17 crc kubenswrapper[4755]: I0224 10:22:17.965493 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t2qqz"
Feb 24 10:22:18 crc kubenswrapper[4755]: I0224 10:22:18.036534 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t2qqz"
Feb 24 10:22:18 crc kubenswrapper[4755]: I0224 10:22:18.538317 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t2qqz"
Feb 24 10:22:18 crc kubenswrapper[4755]: I0224 10:22:18.606429 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t2qqz"]
Feb 24 10:22:18 crc kubenswrapper[4755]: I0224 10:22:18.844488 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46172: no serving certificate available for the kubelet"
Feb 24 10:22:19 crc kubenswrapper[4755]: I0224 10:22:19.745705 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46178: no serving certificate available for the kubelet"
Feb 24 10:22:19 crc kubenswrapper[4755]: I0224 10:22:19.887906 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 24 10:22:19 crc kubenswrapper[4755]: I0224 10:22:19.887992 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 24 10:22:20 crc kubenswrapper[4755]: I0224 10:22:20.480227 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t2qqz" podUID="222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e" containerName="registry-server" containerID="cri-o://c1ddcfc49adb3d573f78fa7697db1726acaa2f6d247aac879e24807064ea9495" gracePeriod=2
Feb 24 10:22:20 crc kubenswrapper[4755]: I0224 10:22:20.967429 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t2qqz"
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.051429 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e-utilities\") pod \"222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e\" (UID: \"222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e\") "
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.051509 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e-catalog-content\") pod \"222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e\" (UID: \"222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e\") "
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.051556 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx2rx\" (UniqueName: \"kubernetes.io/projected/222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e-kube-api-access-lx2rx\") pod \"222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e\" (UID: \"222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e\") "
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.054336 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e-utilities" (OuterVolumeSpecName: "utilities") pod "222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e" (UID: "222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.063844 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e-kube-api-access-lx2rx" (OuterVolumeSpecName: "kube-api-access-lx2rx") pod "222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e" (UID: "222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e"). InnerVolumeSpecName "kube-api-access-lx2rx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.120464 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e" (UID: "222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.153548 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.153579 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.153596 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx2rx\" (UniqueName: \"kubernetes.io/projected/222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e-kube-api-access-lx2rx\") on node \"crc\" DevicePath \"\""
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.289322 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.289955 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.493190 4755 generic.go:334] "Generic (PLEG): container finished" podID="222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e" containerID="c1ddcfc49adb3d573f78fa7697db1726acaa2f6d247aac879e24807064ea9495" exitCode=0
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.493252 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t2qqz"
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.493275 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t2qqz" event={"ID":"222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e","Type":"ContainerDied","Data":"c1ddcfc49adb3d573f78fa7697db1726acaa2f6d247aac879e24807064ea9495"}
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.493857 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t2qqz" event={"ID":"222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e","Type":"ContainerDied","Data":"12c247efe6fd0bc656a3d99400db51c28e28f68b4435e4108d3a48ebb6f737fd"}
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.493896 4755 scope.go:117] "RemoveContainer" containerID="c1ddcfc49adb3d573f78fa7697db1726acaa2f6d247aac879e24807064ea9495"
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.521387 4755 scope.go:117] "RemoveContainer" containerID="bc55e2a2021c1654ba761d1d7504ff51aa67e87f2e43327807c08a9114eadf6d"
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.551355 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t2qqz"]
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.565595 4755 scope.go:117] "RemoveContainer" containerID="3efb97be4be0aebea710e4dba349ff3427ef7c88434eac42acd462f1bb96e060"
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.568170 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t2qqz"]
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.615184 4755 scope.go:117] "RemoveContainer" containerID="c1ddcfc49adb3d573f78fa7697db1726acaa2f6d247aac879e24807064ea9495"
Feb 24 10:22:21 crc kubenswrapper[4755]: E0224 10:22:21.616128 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1ddcfc49adb3d573f78fa7697db1726acaa2f6d247aac879e24807064ea9495\": container with ID starting with c1ddcfc49adb3d573f78fa7697db1726acaa2f6d247aac879e24807064ea9495 not found: ID does not exist" containerID="c1ddcfc49adb3d573f78fa7697db1726acaa2f6d247aac879e24807064ea9495"
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.616192 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ddcfc49adb3d573f78fa7697db1726acaa2f6d247aac879e24807064ea9495"} err="failed to get container status \"c1ddcfc49adb3d573f78fa7697db1726acaa2f6d247aac879e24807064ea9495\": rpc error: code = NotFound desc = could not find container \"c1ddcfc49adb3d573f78fa7697db1726acaa2f6d247aac879e24807064ea9495\": container with ID starting with c1ddcfc49adb3d573f78fa7697db1726acaa2f6d247aac879e24807064ea9495 not found: ID does not exist"
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.616233 4755 scope.go:117] "RemoveContainer" containerID="bc55e2a2021c1654ba761d1d7504ff51aa67e87f2e43327807c08a9114eadf6d"
Feb 24 10:22:21 crc kubenswrapper[4755]: E0224 10:22:21.616728 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc55e2a2021c1654ba761d1d7504ff51aa67e87f2e43327807c08a9114eadf6d\": container with ID starting with bc55e2a2021c1654ba761d1d7504ff51aa67e87f2e43327807c08a9114eadf6d not found: ID does not exist" containerID="bc55e2a2021c1654ba761d1d7504ff51aa67e87f2e43327807c08a9114eadf6d"
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.616758 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc55e2a2021c1654ba761d1d7504ff51aa67e87f2e43327807c08a9114eadf6d"} err="failed to get container status \"bc55e2a2021c1654ba761d1d7504ff51aa67e87f2e43327807c08a9114eadf6d\": rpc error: code = NotFound desc = could not find container \"bc55e2a2021c1654ba761d1d7504ff51aa67e87f2e43327807c08a9114eadf6d\": container with ID starting with bc55e2a2021c1654ba761d1d7504ff51aa67e87f2e43327807c08a9114eadf6d not found: ID does not exist"
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.616781 4755 scope.go:117] "RemoveContainer" containerID="3efb97be4be0aebea710e4dba349ff3427ef7c88434eac42acd462f1bb96e060"
Feb 24 10:22:21 crc kubenswrapper[4755]: E0224 10:22:21.617181 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3efb97be4be0aebea710e4dba349ff3427ef7c88434eac42acd462f1bb96e060\": container with ID starting with 3efb97be4be0aebea710e4dba349ff3427ef7c88434eac42acd462f1bb96e060 not found: ID does not exist" containerID="3efb97be4be0aebea710e4dba349ff3427ef7c88434eac42acd462f1bb96e060"
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.617202 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3efb97be4be0aebea710e4dba349ff3427ef7c88434eac42acd462f1bb96e060"} err="failed to get container status \"3efb97be4be0aebea710e4dba349ff3427ef7c88434eac42acd462f1bb96e060\": rpc error: code = NotFound desc = could not find container \"3efb97be4be0aebea710e4dba349ff3427ef7c88434eac42acd462f1bb96e060\": container with ID starting with 3efb97be4be0aebea710e4dba349ff3427ef7c88434eac42acd462f1bb96e060 not found: ID does not exist"
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.695582 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.695643 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 10:22:21 crc kubenswrapper[4755]: I0224 10:22:21.891697 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46186: no serving certificate available for the kubelet"
Feb 24 10:22:22 crc kubenswrapper[4755]: I0224 10:22:22.328901 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e" path="/var/lib/kubelet/pods/222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e/volumes"
Feb 24 10:22:22 crc kubenswrapper[4755]: I0224 10:22:22.791885 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46188: no serving certificate available for the kubelet"
Feb 24 10:22:24 crc kubenswrapper[4755]: I0224 10:22:24.953904 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49796: no serving certificate available for the kubelet"
Feb 24 10:22:25 crc kubenswrapper[4755]: I0224 10:22:25.895738 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49802: no serving certificate available for the kubelet"
Feb 24 10:22:28 crc kubenswrapper[4755]: I0224 10:22:28.009018 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49808: no serving certificate available for the kubelet"
Feb 24 10:22:28 crc kubenswrapper[4755]: I0224 10:22:28.953886 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49814: no serving certificate available for the kubelet"
Feb 24 10:22:31 crc kubenswrapper[4755]: I0224 10:22:31.058775 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49830: no serving certificate available for the kubelet"
Feb 24 10:22:32 crc kubenswrapper[4755]: I0224 10:22:32.015837 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49844: no serving certificate available for the kubelet"
Feb 24 10:22:34 crc kubenswrapper[4755]: I0224 10:22:34.125409 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56472: no serving certificate available for the kubelet"
Feb 24 10:22:35 crc kubenswrapper[4755]: I0224 10:22:35.240531 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56484: no serving certificate available for the kubelet"
Feb 24 10:22:37 crc kubenswrapper[4755]: I0224 10:22:37.174901 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56498: no serving certificate available for the kubelet"
Feb 24 10:22:38 crc kubenswrapper[4755]: I0224 10:22:38.286173 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56510: no serving certificate available for the kubelet"
Feb 24 10:22:40 crc kubenswrapper[4755]: I0224 10:22:40.213795 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56518: no serving certificate available for the kubelet"
Feb 24 10:22:41 crc kubenswrapper[4755]: I0224 10:22:41.416547 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56520: no serving certificate available for the kubelet"
Feb 24 10:22:43 crc kubenswrapper[4755]: I0224 10:22:43.271013 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56522: no serving certificate available for the kubelet"
Feb 24 10:22:44 crc kubenswrapper[4755]: I0224 10:22:44.450399 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38766: no serving certificate available for the kubelet"
Feb 24 10:22:45 crc kubenswrapper[4755]: I0224 10:22:45.315323 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7lz4g"]
Feb 24 10:22:45 crc kubenswrapper[4755]: E0224 10:22:45.316251 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e" containerName="extract-utilities"
Feb 24 10:22:45 crc kubenswrapper[4755]: I0224 10:22:45.316367 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e" containerName="extract-utilities"
Feb 24 10:22:45 crc kubenswrapper[4755]: E0224 10:22:45.316502 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e" containerName="registry-server"
Feb 24 10:22:45 crc kubenswrapper[4755]: I0224 10:22:45.316593 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e" containerName="registry-server"
Feb 24 10:22:45 crc kubenswrapper[4755]: E0224 10:22:45.316675 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e" containerName="extract-content"
Feb 24 10:22:45 crc kubenswrapper[4755]: I0224 10:22:45.316761 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e" containerName="extract-content"
Feb 24 10:22:45 crc kubenswrapper[4755]: I0224 10:22:45.317027 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="222b1ad6-6e5d-4f70-bf5d-6ad2e0d4339e" containerName="registry-server"
Feb 24 10:22:45 crc kubenswrapper[4755]: I0224 10:22:45.318831 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7lz4g"
Feb 24 10:22:45 crc kubenswrapper[4755]: I0224 10:22:45.345853 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7lz4g"]
Feb 24 10:22:45 crc kubenswrapper[4755]: I0224 10:22:45.401868 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56590428-885d-4f1e-a9d7-d4066eb72787-catalog-content\") pod \"redhat-operators-7lz4g\" (UID: \"56590428-885d-4f1e-a9d7-d4066eb72787\") " pod="openshift-marketplace/redhat-operators-7lz4g"
Feb 24 10:22:45 crc kubenswrapper[4755]: I0224 10:22:45.402314 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56590428-885d-4f1e-a9d7-d4066eb72787-utilities\") pod \"redhat-operators-7lz4g\" (UID: \"56590428-885d-4f1e-a9d7-d4066eb72787\") " pod="openshift-marketplace/redhat-operators-7lz4g"
Feb 24 10:22:45 crc kubenswrapper[4755]: I0224 10:22:45.402415 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4lcm\" (UniqueName: \"kubernetes.io/projected/56590428-885d-4f1e-a9d7-d4066eb72787-kube-api-access-j4lcm\") pod \"redhat-operators-7lz4g\" (UID: \"56590428-885d-4f1e-a9d7-d4066eb72787\") " pod="openshift-marketplace/redhat-operators-7lz4g"
Feb 24 10:22:45 crc kubenswrapper[4755]: I0224 10:22:45.503577 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56590428-885d-4f1e-a9d7-d4066eb72787-catalog-content\") pod \"redhat-operators-7lz4g\" (UID: \"56590428-885d-4f1e-a9d7-d4066eb72787\") " pod="openshift-marketplace/redhat-operators-7lz4g"
Feb 24 10:22:45 crc kubenswrapper[4755]: I0224 10:22:45.503675 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56590428-885d-4f1e-a9d7-d4066eb72787-utilities\") pod \"redhat-operators-7lz4g\" (UID: \"56590428-885d-4f1e-a9d7-d4066eb72787\") " pod="openshift-marketplace/redhat-operators-7lz4g"
Feb 24 10:22:45 crc kubenswrapper[4755]: I0224 10:22:45.503708 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4lcm\" (UniqueName: \"kubernetes.io/projected/56590428-885d-4f1e-a9d7-d4066eb72787-kube-api-access-j4lcm\") pod \"redhat-operators-7lz4g\" (UID: \"56590428-885d-4f1e-a9d7-d4066eb72787\") " pod="openshift-marketplace/redhat-operators-7lz4g"
Feb 24 10:22:45 crc kubenswrapper[4755]: I0224 10:22:45.504408 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56590428-885d-4f1e-a9d7-d4066eb72787-catalog-content\") pod \"redhat-operators-7lz4g\" (UID: \"56590428-885d-4f1e-a9d7-d4066eb72787\") " pod="openshift-marketplace/redhat-operators-7lz4g"
Feb 24 10:22:45 crc kubenswrapper[4755]: I0224 10:22:45.504798 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56590428-885d-4f1e-a9d7-d4066eb72787-utilities\") pod \"redhat-operators-7lz4g\" (UID: \"56590428-885d-4f1e-a9d7-d4066eb72787\") " pod="openshift-marketplace/redhat-operators-7lz4g"
Feb 24 10:22:45 crc kubenswrapper[4755]: I0224 10:22:45.526611 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4lcm\" (UniqueName: \"kubernetes.io/projected/56590428-885d-4f1e-a9d7-d4066eb72787-kube-api-access-j4lcm\") pod \"redhat-operators-7lz4g\" (UID: \"56590428-885d-4f1e-a9d7-d4066eb72787\") " pod="openshift-marketplace/redhat-operators-7lz4g"
Feb 24 10:22:45 crc kubenswrapper[4755]: I0224 10:22:45.658251 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7lz4g"
Feb 24 10:22:46 crc kubenswrapper[4755]: I0224 10:22:46.134202 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7lz4g"]
Feb 24 10:22:46 crc kubenswrapper[4755]: I0224 10:22:46.315115 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38782: no serving certificate available for the kubelet"
Feb 24 10:22:46 crc kubenswrapper[4755]: I0224 10:22:46.741688 4755 generic.go:334] "Generic (PLEG): container finished" podID="56590428-885d-4f1e-a9d7-d4066eb72787" containerID="79f6b276e397631de66943d17b96fcf612e154b0ded95b8ef5016ca032973a0e" exitCode=0
Feb 24 10:22:46 crc kubenswrapper[4755]: I0224 10:22:46.741738 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lz4g" event={"ID":"56590428-885d-4f1e-a9d7-d4066eb72787","Type":"ContainerDied","Data":"79f6b276e397631de66943d17b96fcf612e154b0ded95b8ef5016ca032973a0e"}
Feb 24 10:22:46 crc kubenswrapper[4755]: I0224 10:22:46.741799 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lz4g" event={"ID":"56590428-885d-4f1e-a9d7-d4066eb72787","Type":"ContainerStarted","Data":"61f637cb196838edcefbd499c9e04f9806c6aac2221eccfc4f0dfaea305d5fd4"}
Feb 24 10:22:47 crc kubenswrapper[4755]: I0224 10:22:47.517934 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38790: no serving certificate available for the kubelet"
Feb 24 10:22:47 crc kubenswrapper[4755]: I0224 10:22:47.752635 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lz4g" event={"ID":"56590428-885d-4f1e-a9d7-d4066eb72787","Type":"ContainerStarted","Data":"93c79f569a76e53f5d6fd34360cc8910175f07da74981ad1ed214815aacf8619"}
Feb 24 10:22:49 crc kubenswrapper[4755]: I0224 10:22:49.368871 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38802: no serving certificate available for the kubelet"
Feb 24 10:22:50 crc kubenswrapper[4755]: I0224 10:22:50.573014 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38804: no serving certificate available for the kubelet"
Feb 24 10:22:50 crc kubenswrapper[4755]: I0224 10:22:50.781394 4755 generic.go:334] "Generic (PLEG): container finished" podID="56590428-885d-4f1e-a9d7-d4066eb72787" containerID="93c79f569a76e53f5d6fd34360cc8910175f07da74981ad1ed214815aacf8619" exitCode=0
Feb 24 10:22:50 crc kubenswrapper[4755]: I0224 10:22:50.781501 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lz4g" event={"ID":"56590428-885d-4f1e-a9d7-d4066eb72787","Type":"ContainerDied","Data":"93c79f569a76e53f5d6fd34360cc8910175f07da74981ad1ed214815aacf8619"}
Feb 24 10:22:51 crc kubenswrapper[4755]: I0224 10:22:51.694549 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 10:22:51 crc kubenswrapper[4755]: I0224 10:22:51.694646 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 10:22:51 crc kubenswrapper[4755]: I0224 10:22:51.694732 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll"
Feb 24 10:22:51 crc kubenswrapper[4755]: I0224 10:22:51.695758 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733"} pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 24 10:22:51 crc kubenswrapper[4755]: I0224 10:22:51.695876 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" containerID="cri-o://70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" gracePeriod=600
Feb 24 10:22:51 crc kubenswrapper[4755]: I0224 10:22:51.793143 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lz4g" event={"ID":"56590428-885d-4f1e-a9d7-d4066eb72787","Type":"ContainerStarted","Data":"7767dcc5980bb71adf52205158f4e3af0e2f32c17137fe4b92f90f4338667d87"}
Feb 24 10:22:51 crc kubenswrapper[4755]: E0224 10:22:51.818104 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5"
Feb 24 10:22:51 crc kubenswrapper[4755]: I0224 10:22:51.850098 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7lz4g" podStartSLOduration=2.081238574 podStartE2EDuration="6.850056098s" podCreationTimestamp="2026-02-24 10:22:45 +0000 UTC" firstStartedPulling="2026-02-24 10:22:46.743398239 +0000 UTC m=+1671.199920782" lastFinishedPulling="2026-02-24 10:22:51.512215763 +0000 UTC m=+1675.968738306" observedRunningTime="2026-02-24 10:22:51.835394176 +0000 UTC m=+1676.291916739" watchObservedRunningTime="2026-02-24 10:22:51.850056098 +0000 UTC m=+1676.306578651"
Feb 24 10:22:52 crc kubenswrapper[4755]: I0224 10:22:52.411292 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38818: no serving certificate available for the kubelet"
Feb 24 10:22:52 crc kubenswrapper[4755]: I0224 10:22:52.804101 4755 generic.go:334] "Generic (PLEG): container finished" podID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" exitCode=0
Feb 24 10:22:52 crc kubenswrapper[4755]: I0224 10:22:52.804152 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerDied","Data":"70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733"}
Feb 24 10:22:52 crc kubenswrapper[4755]: I0224 10:22:52.804191 4755 scope.go:117] "RemoveContainer" containerID="3bcc552811b027de294708fb8fbb284b52f08a7647c99aefef0df363ad7db3d3"
Feb 24 10:22:52 crc kubenswrapper[4755]: I0224 10:22:52.804845 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733"
Feb 24 10:22:52 crc kubenswrapper[4755]: E0224 10:22:52.805144 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5"
Feb 24 10:22:53 crc kubenswrapper[4755]: I0224 10:22:53.626109 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38832: no serving certificate available for the kubelet"
Feb 24 10:22:55 crc kubenswrapper[4755]: I0224 10:22:55.465376 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57570: no serving certificate available for the kubelet"
Feb 24 10:22:55 crc kubenswrapper[4755]: I0224 10:22:55.658978 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7lz4g"
Feb 24 10:22:55 crc kubenswrapper[4755]: I0224 10:22:55.659040 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7lz4g"
Feb 24 10:22:56 crc kubenswrapper[4755]: I0224 10:22:56.666788 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57582: no serving certificate available for the kubelet"
Feb 24 10:22:56 crc kubenswrapper[4755]: I0224 10:22:56.698901 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7lz4g" podUID="56590428-885d-4f1e-a9d7-d4066eb72787" containerName="registry-server" probeResult="failure" output=<
Feb 24 10:22:56 crc kubenswrapper[4755]: timeout: failed to connect service ":50051" within 1s
Feb 24 10:22:56 crc kubenswrapper[4755]: >
Feb 24 10:22:58 crc kubenswrapper[4755]: I0224 10:22:58.518468 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57588: no serving certificate available for the kubelet"
Feb 24 10:22:59 crc kubenswrapper[4755]: I0224 10:22:59.716275 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57596: no serving certificate available for the kubelet"
Feb 24 10:23:01 crc kubenswrapper[4755]: I0224 10:23:01.571492 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57604: no serving certificate available for the kubelet"
Feb 24 10:23:02 crc kubenswrapper[4755]: I0224 10:23:02.758607 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57618: no serving certificate available for the kubelet"
Feb 24 10:23:04 crc kubenswrapper[4755]: I0224 10:23:04.317293 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733"
Feb 24 10:23:04 crc kubenswrapper[4755]: E0224 10:23:04.318018 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5"
Feb 24 10:23:04 crc kubenswrapper[4755]: I0224 10:23:04.611643 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48320: no serving certificate available for the kubelet"
Feb 24 10:23:05 crc kubenswrapper[4755]: I0224 10:23:05.728504 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7lz4g"
Feb 24 10:23:05 crc kubenswrapper[4755]: I0224 10:23:05.799809 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48328: no serving certificate available for the kubelet"
Feb 24 10:23:05 crc kubenswrapper[4755]: I0224 10:23:05.812776 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7lz4g"
Feb 24 10:23:05 crc kubenswrapper[4755]: I0224 10:23:05.974207 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7lz4g"]
Feb 24 10:23:06 crc kubenswrapper[4755]: I0224 10:23:06.937687 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7lz4g" podUID="56590428-885d-4f1e-a9d7-d4066eb72787" containerName="registry-server" containerID="cri-o://7767dcc5980bb71adf52205158f4e3af0e2f32c17137fe4b92f90f4338667d87" gracePeriod=2
Feb 24 10:23:07 crc kubenswrapper[4755]: I0224 10:23:07.479331 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7lz4g"
Feb 24 10:23:07 crc kubenswrapper[4755]: I0224 10:23:07.640356 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56590428-885d-4f1e-a9d7-d4066eb72787-catalog-content\") pod \"56590428-885d-4f1e-a9d7-d4066eb72787\" (UID: \"56590428-885d-4f1e-a9d7-d4066eb72787\") "
Feb 24 10:23:07 crc kubenswrapper[4755]: I0224 10:23:07.640416 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4lcm\" (UniqueName: \"kubernetes.io/projected/56590428-885d-4f1e-a9d7-d4066eb72787-kube-api-access-j4lcm\") pod \"56590428-885d-4f1e-a9d7-d4066eb72787\" (UID: \"56590428-885d-4f1e-a9d7-d4066eb72787\") "
Feb 24 10:23:07 crc kubenswrapper[4755]: I0224 10:23:07.640544 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56590428-885d-4f1e-a9d7-d4066eb72787-utilities\") pod \"56590428-885d-4f1e-a9d7-d4066eb72787\" (UID: \"56590428-885d-4f1e-a9d7-d4066eb72787\") "
Feb 24 10:23:07 crc kubenswrapper[4755]: I0224 10:23:07.641711 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56590428-885d-4f1e-a9d7-d4066eb72787-utilities" (OuterVolumeSpecName: "utilities") pod "56590428-885d-4f1e-a9d7-d4066eb72787" (UID: "56590428-885d-4f1e-a9d7-d4066eb72787"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 10:23:07 crc kubenswrapper[4755]: I0224 10:23:07.646658 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56590428-885d-4f1e-a9d7-d4066eb72787-kube-api-access-j4lcm" (OuterVolumeSpecName: "kube-api-access-j4lcm") pod "56590428-885d-4f1e-a9d7-d4066eb72787" (UID: "56590428-885d-4f1e-a9d7-d4066eb72787"). InnerVolumeSpecName "kube-api-access-j4lcm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 10:23:07 crc kubenswrapper[4755]: I0224 10:23:07.650830 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48336: no serving certificate available for the kubelet"
Feb 24 10:23:07 crc kubenswrapper[4755]: I0224 10:23:07.742132 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56590428-885d-4f1e-a9d7-d4066eb72787-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 10:23:07 crc kubenswrapper[4755]: I0224 10:23:07.742179 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4lcm\" (UniqueName: \"kubernetes.io/projected/56590428-885d-4f1e-a9d7-d4066eb72787-kube-api-access-j4lcm\") on node \"crc\" DevicePath \"\""
Feb 24 10:23:07 crc kubenswrapper[4755]: I0224 10:23:07.755671 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56590428-885d-4f1e-a9d7-d4066eb72787-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56590428-885d-4f1e-a9d7-d4066eb72787" (UID: "56590428-885d-4f1e-a9d7-d4066eb72787"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:23:07 crc kubenswrapper[4755]: I0224 10:23:07.843402 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56590428-885d-4f1e-a9d7-d4066eb72787-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 10:23:07 crc kubenswrapper[4755]: I0224 10:23:07.952150 4755 generic.go:334] "Generic (PLEG): container finished" podID="56590428-885d-4f1e-a9d7-d4066eb72787" containerID="7767dcc5980bb71adf52205158f4e3af0e2f32c17137fe4b92f90f4338667d87" exitCode=0 Feb 24 10:23:07 crc kubenswrapper[4755]: I0224 10:23:07.952228 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lz4g" event={"ID":"56590428-885d-4f1e-a9d7-d4066eb72787","Type":"ContainerDied","Data":"7767dcc5980bb71adf52205158f4e3af0e2f32c17137fe4b92f90f4338667d87"} Feb 24 10:23:07 crc kubenswrapper[4755]: I0224 10:23:07.952237 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7lz4g" Feb 24 10:23:07 crc kubenswrapper[4755]: I0224 10:23:07.952309 4755 scope.go:117] "RemoveContainer" containerID="7767dcc5980bb71adf52205158f4e3af0e2f32c17137fe4b92f90f4338667d87" Feb 24 10:23:07 crc kubenswrapper[4755]: I0224 10:23:07.952285 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lz4g" event={"ID":"56590428-885d-4f1e-a9d7-d4066eb72787","Type":"ContainerDied","Data":"61f637cb196838edcefbd499c9e04f9806c6aac2221eccfc4f0dfaea305d5fd4"} Feb 24 10:23:07 crc kubenswrapper[4755]: I0224 10:23:07.978982 4755 scope.go:117] "RemoveContainer" containerID="93c79f569a76e53f5d6fd34360cc8910175f07da74981ad1ed214815aacf8619" Feb 24 10:23:07 crc kubenswrapper[4755]: I0224 10:23:07.999321 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7lz4g"] Feb 24 10:23:08 crc kubenswrapper[4755]: I0224 10:23:08.006870 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7lz4g"] Feb 24 10:23:08 crc kubenswrapper[4755]: I0224 10:23:08.019460 4755 scope.go:117] "RemoveContainer" containerID="79f6b276e397631de66943d17b96fcf612e154b0ded95b8ef5016ca032973a0e" Feb 24 10:23:08 crc kubenswrapper[4755]: I0224 10:23:08.048255 4755 scope.go:117] "RemoveContainer" containerID="7767dcc5980bb71adf52205158f4e3af0e2f32c17137fe4b92f90f4338667d87" Feb 24 10:23:08 crc kubenswrapper[4755]: E0224 10:23:08.048679 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7767dcc5980bb71adf52205158f4e3af0e2f32c17137fe4b92f90f4338667d87\": container with ID starting with 7767dcc5980bb71adf52205158f4e3af0e2f32c17137fe4b92f90f4338667d87 not found: ID does not exist" containerID="7767dcc5980bb71adf52205158f4e3af0e2f32c17137fe4b92f90f4338667d87" Feb 24 10:23:08 crc kubenswrapper[4755]: I0224 10:23:08.048728 4755 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7767dcc5980bb71adf52205158f4e3af0e2f32c17137fe4b92f90f4338667d87"} err="failed to get container status \"7767dcc5980bb71adf52205158f4e3af0e2f32c17137fe4b92f90f4338667d87\": rpc error: code = NotFound desc = could not find container \"7767dcc5980bb71adf52205158f4e3af0e2f32c17137fe4b92f90f4338667d87\": container with ID starting with 7767dcc5980bb71adf52205158f4e3af0e2f32c17137fe4b92f90f4338667d87 not found: ID does not exist" Feb 24 10:23:08 crc kubenswrapper[4755]: I0224 10:23:08.048758 4755 scope.go:117] "RemoveContainer" containerID="93c79f569a76e53f5d6fd34360cc8910175f07da74981ad1ed214815aacf8619" Feb 24 10:23:08 crc kubenswrapper[4755]: E0224 10:23:08.049159 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93c79f569a76e53f5d6fd34360cc8910175f07da74981ad1ed214815aacf8619\": container with ID starting with 93c79f569a76e53f5d6fd34360cc8910175f07da74981ad1ed214815aacf8619 not found: ID does not exist" containerID="93c79f569a76e53f5d6fd34360cc8910175f07da74981ad1ed214815aacf8619" Feb 24 10:23:08 crc kubenswrapper[4755]: I0224 10:23:08.049194 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93c79f569a76e53f5d6fd34360cc8910175f07da74981ad1ed214815aacf8619"} err="failed to get container status \"93c79f569a76e53f5d6fd34360cc8910175f07da74981ad1ed214815aacf8619\": rpc error: code = NotFound desc = could not find container \"93c79f569a76e53f5d6fd34360cc8910175f07da74981ad1ed214815aacf8619\": container with ID starting with 93c79f569a76e53f5d6fd34360cc8910175f07da74981ad1ed214815aacf8619 not found: ID does not exist" Feb 24 10:23:08 crc kubenswrapper[4755]: I0224 10:23:08.049237 4755 scope.go:117] "RemoveContainer" containerID="79f6b276e397631de66943d17b96fcf612e154b0ded95b8ef5016ca032973a0e" Feb 24 10:23:08 crc kubenswrapper[4755]: E0224 
10:23:08.049514 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79f6b276e397631de66943d17b96fcf612e154b0ded95b8ef5016ca032973a0e\": container with ID starting with 79f6b276e397631de66943d17b96fcf612e154b0ded95b8ef5016ca032973a0e not found: ID does not exist" containerID="79f6b276e397631de66943d17b96fcf612e154b0ded95b8ef5016ca032973a0e" Feb 24 10:23:08 crc kubenswrapper[4755]: I0224 10:23:08.049561 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79f6b276e397631de66943d17b96fcf612e154b0ded95b8ef5016ca032973a0e"} err="failed to get container status \"79f6b276e397631de66943d17b96fcf612e154b0ded95b8ef5016ca032973a0e\": rpc error: code = NotFound desc = could not find container \"79f6b276e397631de66943d17b96fcf612e154b0ded95b8ef5016ca032973a0e\": container with ID starting with 79f6b276e397631de66943d17b96fcf612e154b0ded95b8ef5016ca032973a0e not found: ID does not exist" Feb 24 10:23:08 crc kubenswrapper[4755]: I0224 10:23:08.330569 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56590428-885d-4f1e-a9d7-d4066eb72787" path="/var/lib/kubelet/pods/56590428-885d-4f1e-a9d7-d4066eb72787/volumes" Feb 24 10:23:08 crc kubenswrapper[4755]: I0224 10:23:08.858538 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48338: no serving certificate available for the kubelet" Feb 24 10:23:10 crc kubenswrapper[4755]: I0224 10:23:10.700112 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48346: no serving certificate available for the kubelet" Feb 24 10:23:11 crc kubenswrapper[4755]: I0224 10:23:11.913286 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48356: no serving certificate available for the kubelet" Feb 24 10:23:13 crc kubenswrapper[4755]: I0224 10:23:13.742216 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45098: no serving certificate available for the kubelet" Feb 24 10:23:14 crc 
kubenswrapper[4755]: I0224 10:23:14.974164 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45100: no serving certificate available for the kubelet" Feb 24 10:23:16 crc kubenswrapper[4755]: I0224 10:23:16.787649 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45108: no serving certificate available for the kubelet" Feb 24 10:23:18 crc kubenswrapper[4755]: I0224 10:23:18.033621 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45116: no serving certificate available for the kubelet" Feb 24 10:23:19 crc kubenswrapper[4755]: I0224 10:23:19.317725 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" Feb 24 10:23:19 crc kubenswrapper[4755]: E0224 10:23:19.318861 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:23:19 crc kubenswrapper[4755]: I0224 10:23:19.832570 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45120: no serving certificate available for the kubelet" Feb 24 10:23:21 crc kubenswrapper[4755]: I0224 10:23:21.082692 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45136: no serving certificate available for the kubelet" Feb 24 10:23:22 crc kubenswrapper[4755]: I0224 10:23:22.885900 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45148: no serving certificate available for the kubelet" Feb 24 10:23:24 crc kubenswrapper[4755]: I0224 10:23:24.142493 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45500: no serving certificate available for the kubelet" Feb 24 10:23:25 crc kubenswrapper[4755]: I0224 10:23:25.926019 4755 ???:1] "http: TLS 
handshake error from 192.168.126.11:45508: no serving certificate available for the kubelet" Feb 24 10:23:27 crc kubenswrapper[4755]: I0224 10:23:27.196214 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45514: no serving certificate available for the kubelet" Feb 24 10:23:28 crc kubenswrapper[4755]: I0224 10:23:28.975279 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45524: no serving certificate available for the kubelet" Feb 24 10:23:30 crc kubenswrapper[4755]: I0224 10:23:30.241462 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45526: no serving certificate available for the kubelet" Feb 24 10:23:32 crc kubenswrapper[4755]: I0224 10:23:32.030327 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45532: no serving certificate available for the kubelet" Feb 24 10:23:32 crc kubenswrapper[4755]: I0224 10:23:32.316653 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" Feb 24 10:23:32 crc kubenswrapper[4755]: E0224 10:23:32.317595 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:23:33 crc kubenswrapper[4755]: I0224 10:23:33.293907 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45544: no serving certificate available for the kubelet" Feb 24 10:23:35 crc kubenswrapper[4755]: I0224 10:23:35.074460 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59834: no serving certificate available for the kubelet" Feb 24 10:23:36 crc kubenswrapper[4755]: I0224 10:23:36.345690 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59842: no serving certificate 
available for the kubelet" Feb 24 10:23:38 crc kubenswrapper[4755]: I0224 10:23:38.131646 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59846: no serving certificate available for the kubelet" Feb 24 10:23:39 crc kubenswrapper[4755]: I0224 10:23:39.393281 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59850: no serving certificate available for the kubelet" Feb 24 10:23:41 crc kubenswrapper[4755]: I0224 10:23:41.192399 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59858: no serving certificate available for the kubelet" Feb 24 10:23:42 crc kubenswrapper[4755]: I0224 10:23:42.443471 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59860: no serving certificate available for the kubelet" Feb 24 10:23:44 crc kubenswrapper[4755]: I0224 10:23:44.232868 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57044: no serving certificate available for the kubelet" Feb 24 10:23:45 crc kubenswrapper[4755]: I0224 10:23:45.494341 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57056: no serving certificate available for the kubelet" Feb 24 10:23:46 crc kubenswrapper[4755]: I0224 10:23:46.322529 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" Feb 24 10:23:46 crc kubenswrapper[4755]: E0224 10:23:46.323004 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:23:47 crc kubenswrapper[4755]: I0224 10:23:47.279165 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57070: no serving certificate available for the kubelet" Feb 24 10:23:48 crc kubenswrapper[4755]: I0224 
10:23:48.553204 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57074: no serving certificate available for the kubelet" Feb 24 10:23:50 crc kubenswrapper[4755]: I0224 10:23:50.315338 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57078: no serving certificate available for the kubelet" Feb 24 10:23:51 crc kubenswrapper[4755]: I0224 10:23:51.603184 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57094: no serving certificate available for the kubelet" Feb 24 10:23:53 crc kubenswrapper[4755]: I0224 10:23:53.359520 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57110: no serving certificate available for the kubelet" Feb 24 10:23:54 crc kubenswrapper[4755]: I0224 10:23:54.655137 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40616: no serving certificate available for the kubelet" Feb 24 10:23:56 crc kubenswrapper[4755]: I0224 10:23:56.416118 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40628: no serving certificate available for the kubelet" Feb 24 10:23:57 crc kubenswrapper[4755]: I0224 10:23:57.713103 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40640: no serving certificate available for the kubelet" Feb 24 10:23:58 crc kubenswrapper[4755]: I0224 10:23:58.402496 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40650: no serving certificate available for the kubelet" Feb 24 10:23:59 crc kubenswrapper[4755]: I0224 10:23:59.478480 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40654: no serving certificate available for the kubelet" Feb 24 10:24:00 crc kubenswrapper[4755]: I0224 10:24:00.317607 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" Feb 24 10:24:00 crc kubenswrapper[4755]: E0224 10:24:00.317982 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:24:00 crc kubenswrapper[4755]: I0224 10:24:00.773812 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40670: no serving certificate available for the kubelet" Feb 24 10:24:02 crc kubenswrapper[4755]: I0224 10:24:02.518810 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40676: no serving certificate available for the kubelet" Feb 24 10:24:03 crc kubenswrapper[4755]: I0224 10:24:03.816553 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54492: no serving certificate available for the kubelet" Feb 24 10:24:05 crc kubenswrapper[4755]: I0224 10:24:05.573183 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54504: no serving certificate available for the kubelet" Feb 24 10:24:06 crc kubenswrapper[4755]: I0224 10:24:06.870495 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54514: no serving certificate available for the kubelet" Feb 24 10:24:07 crc kubenswrapper[4755]: I0224 10:24:07.009088 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54530: no serving certificate available for the kubelet" Feb 24 10:24:08 crc kubenswrapper[4755]: I0224 10:24:08.591693 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54532: no serving certificate available for the kubelet" Feb 24 10:24:08 crc kubenswrapper[4755]: I0224 10:24:08.628099 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54536: no serving certificate available for the kubelet" Feb 24 10:24:09 crc kubenswrapper[4755]: I0224 10:24:09.920298 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54546: no serving certificate available for the kubelet" Feb 24 10:24:11 crc kubenswrapper[4755]: I0224 10:24:11.685333 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54562: no 
serving certificate available for the kubelet" Feb 24 10:24:12 crc kubenswrapper[4755]: I0224 10:24:12.970957 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54576: no serving certificate available for the kubelet" Feb 24 10:24:13 crc kubenswrapper[4755]: I0224 10:24:13.337954 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" Feb 24 10:24:13 crc kubenswrapper[4755]: E0224 10:24:13.338211 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:24:14 crc kubenswrapper[4755]: I0224 10:24:14.747968 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38682: no serving certificate available for the kubelet" Feb 24 10:24:16 crc kubenswrapper[4755]: I0224 10:24:16.022888 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38694: no serving certificate available for the kubelet" Feb 24 10:24:17 crc kubenswrapper[4755]: I0224 10:24:17.798305 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38706: no serving certificate available for the kubelet" Feb 24 10:24:19 crc kubenswrapper[4755]: I0224 10:24:19.066286 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38712: no serving certificate available for the kubelet" Feb 24 10:24:20 crc kubenswrapper[4755]: I0224 10:24:20.835124 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38726: no serving certificate available for the kubelet" Feb 24 10:24:22 crc kubenswrapper[4755]: I0224 10:24:22.116169 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38740: no serving certificate available for the kubelet" Feb 24 10:24:23 crc 
kubenswrapper[4755]: I0224 10:24:23.897759 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41998: no serving certificate available for the kubelet" Feb 24 10:24:25 crc kubenswrapper[4755]: I0224 10:24:25.181359 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42004: no serving certificate available for the kubelet" Feb 24 10:24:26 crc kubenswrapper[4755]: I0224 10:24:26.953906 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42008: no serving certificate available for the kubelet" Feb 24 10:24:27 crc kubenswrapper[4755]: I0224 10:24:27.316335 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" Feb 24 10:24:27 crc kubenswrapper[4755]: E0224 10:24:27.316615 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:24:28 crc kubenswrapper[4755]: I0224 10:24:28.228720 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42018: no serving certificate available for the kubelet" Feb 24 10:24:29 crc kubenswrapper[4755]: I0224 10:24:29.999275 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42026: no serving certificate available for the kubelet" Feb 24 10:24:31 crc kubenswrapper[4755]: I0224 10:24:31.297645 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42040: no serving certificate available for the kubelet" Feb 24 10:24:33 crc kubenswrapper[4755]: I0224 10:24:33.048711 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42044: no serving certificate available for the kubelet" Feb 24 10:24:34 crc kubenswrapper[4755]: I0224 10:24:34.343240 4755 ???:1] "http: TLS 
handshake error from 192.168.126.11:34620: no serving certificate available for the kubelet" Feb 24 10:24:36 crc kubenswrapper[4755]: I0224 10:24:36.168224 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34634: no serving certificate available for the kubelet" Feb 24 10:24:37 crc kubenswrapper[4755]: I0224 10:24:37.397498 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34648: no serving certificate available for the kubelet" Feb 24 10:24:39 crc kubenswrapper[4755]: I0224 10:24:39.225383 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34662: no serving certificate available for the kubelet" Feb 24 10:24:40 crc kubenswrapper[4755]: I0224 10:24:40.444645 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34666: no serving certificate available for the kubelet" Feb 24 10:24:42 crc kubenswrapper[4755]: I0224 10:24:42.283239 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34676: no serving certificate available for the kubelet" Feb 24 10:24:42 crc kubenswrapper[4755]: I0224 10:24:42.316378 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" Feb 24 10:24:42 crc kubenswrapper[4755]: E0224 10:24:42.316681 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:24:43 crc kubenswrapper[4755]: I0224 10:24:43.523669 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34678: no serving certificate available for the kubelet" Feb 24 10:24:45 crc kubenswrapper[4755]: I0224 10:24:45.337980 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49076: no serving certificate 
available for the kubelet" Feb 24 10:24:46 crc kubenswrapper[4755]: I0224 10:24:46.582904 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49080: no serving certificate available for the kubelet" Feb 24 10:24:48 crc kubenswrapper[4755]: I0224 10:24:48.400390 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49088: no serving certificate available for the kubelet" Feb 24 10:24:49 crc kubenswrapper[4755]: I0224 10:24:49.640986 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49100: no serving certificate available for the kubelet" Feb 24 10:24:51 crc kubenswrapper[4755]: I0224 10:24:51.458312 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49114: no serving certificate available for the kubelet" Feb 24 10:24:52 crc kubenswrapper[4755]: I0224 10:24:52.700599 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49130: no serving certificate available for the kubelet" Feb 24 10:24:53 crc kubenswrapper[4755]: I0224 10:24:53.317118 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" Feb 24 10:24:53 crc kubenswrapper[4755]: E0224 10:24:53.317415 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:24:54 crc kubenswrapper[4755]: I0224 10:24:54.504226 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53236: no serving certificate available for the kubelet" Feb 24 10:24:55 crc kubenswrapper[4755]: I0224 10:24:55.746486 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53238: no serving certificate available for the kubelet" Feb 24 10:24:57 crc kubenswrapper[4755]: I0224 
10:24:57.550922 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53244: no serving certificate available for the kubelet" Feb 24 10:24:58 crc kubenswrapper[4755]: I0224 10:24:58.651847 4755 scope.go:117] "RemoveContainer" containerID="bc5f236581634855b61d26bd8193024771afa0dbab6aab927b373eeb61ec50b8" Feb 24 10:24:58 crc kubenswrapper[4755]: I0224 10:24:58.790444 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53250: no serving certificate available for the kubelet" Feb 24 10:25:00 crc kubenswrapper[4755]: I0224 10:25:00.600584 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53260: no serving certificate available for the kubelet" Feb 24 10:25:01 crc kubenswrapper[4755]: I0224 10:25:01.840539 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53276: no serving certificate available for the kubelet" Feb 24 10:25:03 crc kubenswrapper[4755]: I0224 10:25:03.639722 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53286: no serving certificate available for the kubelet" Feb 24 10:25:04 crc kubenswrapper[4755]: I0224 10:25:04.900161 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57502: no serving certificate available for the kubelet" Feb 24 10:25:06 crc kubenswrapper[4755]: I0224 10:25:06.691204 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57504: no serving certificate available for the kubelet" Feb 24 10:25:07 crc kubenswrapper[4755]: I0224 10:25:07.316963 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" Feb 24 10:25:07 crc kubenswrapper[4755]: E0224 10:25:07.317529 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:25:07 crc kubenswrapper[4755]: I0224 10:25:07.957747 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57514: no serving certificate available for the kubelet" Feb 24 10:25:09 crc kubenswrapper[4755]: I0224 10:25:09.740279 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57520: no serving certificate available for the kubelet" Feb 24 10:25:11 crc kubenswrapper[4755]: I0224 10:25:11.014778 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57526: no serving certificate available for the kubelet" Feb 24 10:25:12 crc kubenswrapper[4755]: I0224 10:25:12.804467 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57528: no serving certificate available for the kubelet" Feb 24 10:25:14 crc kubenswrapper[4755]: I0224 10:25:14.069685 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39080: no serving certificate available for the kubelet" Feb 24 10:25:15 crc kubenswrapper[4755]: I0224 10:25:15.848546 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39094: no serving certificate available for the kubelet" Feb 24 10:25:17 crc kubenswrapper[4755]: I0224 10:25:17.112156 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39098: no serving certificate available for the kubelet" Feb 24 10:25:18 crc kubenswrapper[4755]: I0224 10:25:18.941011 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39102: no serving certificate available for the kubelet" Feb 24 10:25:20 crc kubenswrapper[4755]: I0224 10:25:20.165194 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39106: no serving certificate available for the kubelet" Feb 24 10:25:21 crc kubenswrapper[4755]: I0224 10:25:21.317020 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" Feb 24 10:25:21 crc kubenswrapper[4755]: E0224 10:25:21.317740 4755 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:25:22 crc kubenswrapper[4755]: I0224 10:25:22.012337 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39120: no serving certificate available for the kubelet" Feb 24 10:25:23 crc kubenswrapper[4755]: I0224 10:25:23.228661 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39122: no serving certificate available for the kubelet" Feb 24 10:25:25 crc kubenswrapper[4755]: I0224 10:25:25.048735 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43342: no serving certificate available for the kubelet" Feb 24 10:25:26 crc kubenswrapper[4755]: I0224 10:25:26.279277 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43354: no serving certificate available for the kubelet" Feb 24 10:25:28 crc kubenswrapper[4755]: I0224 10:25:28.107636 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43356: no serving certificate available for the kubelet" Feb 24 10:25:29 crc kubenswrapper[4755]: I0224 10:25:29.337822 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43358: no serving certificate available for the kubelet" Feb 24 10:25:31 crc kubenswrapper[4755]: I0224 10:25:31.162848 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43364: no serving certificate available for the kubelet" Feb 24 10:25:32 crc kubenswrapper[4755]: I0224 10:25:32.317773 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" Feb 24 10:25:32 crc kubenswrapper[4755]: E0224 10:25:32.318301 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:25:32 crc kubenswrapper[4755]: I0224 10:25:32.398869 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43376: no serving certificate available for the kubelet" Feb 24 10:25:34 crc kubenswrapper[4755]: I0224 10:25:34.211427 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52150: no serving certificate available for the kubelet" Feb 24 10:25:35 crc kubenswrapper[4755]: I0224 10:25:35.475076 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52166: no serving certificate available for the kubelet" Feb 24 10:25:37 crc kubenswrapper[4755]: I0224 10:25:37.260884 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52174: no serving certificate available for the kubelet" Feb 24 10:25:38 crc kubenswrapper[4755]: I0224 10:25:38.545362 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52178: no serving certificate available for the kubelet" Feb 24 10:25:40 crc kubenswrapper[4755]: I0224 10:25:40.312206 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52192: no serving certificate available for the kubelet" Feb 24 10:25:41 crc kubenswrapper[4755]: I0224 10:25:41.605290 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52202: no serving certificate available for the kubelet" Feb 24 10:25:43 crc kubenswrapper[4755]: I0224 10:25:43.364423 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52218: no serving certificate available for the kubelet" Feb 24 10:25:44 crc kubenswrapper[4755]: I0224 10:25:44.316392 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" Feb 24 10:25:44 crc kubenswrapper[4755]: E0224 
10:25:44.317002 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:25:44 crc kubenswrapper[4755]: I0224 10:25:44.666231 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39896: no serving certificate available for the kubelet" Feb 24 10:25:46 crc kubenswrapper[4755]: I0224 10:25:46.411225 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39898: no serving certificate available for the kubelet" Feb 24 10:25:47 crc kubenswrapper[4755]: I0224 10:25:47.716938 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39908: no serving certificate available for the kubelet" Feb 24 10:25:49 crc kubenswrapper[4755]: I0224 10:25:49.460236 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39922: no serving certificate available for the kubelet" Feb 24 10:25:50 crc kubenswrapper[4755]: I0224 10:25:50.764781 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39926: no serving certificate available for the kubelet" Feb 24 10:25:52 crc kubenswrapper[4755]: I0224 10:25:52.498480 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39930: no serving certificate available for the kubelet" Feb 24 10:25:53 crc kubenswrapper[4755]: I0224 10:25:53.807586 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53670: no serving certificate available for the kubelet" Feb 24 10:25:55 crc kubenswrapper[4755]: I0224 10:25:55.560501 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53672: no serving certificate available for the kubelet" Feb 24 10:25:56 crc kubenswrapper[4755]: I0224 10:25:56.846875 4755 ???:1] "http: TLS handshake error from 
192.168.126.11:53688: no serving certificate available for the kubelet" Feb 24 10:25:58 crc kubenswrapper[4755]: I0224 10:25:58.316821 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" Feb 24 10:25:58 crc kubenswrapper[4755]: E0224 10:25:58.317393 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:25:58 crc kubenswrapper[4755]: I0224 10:25:58.621956 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53704: no serving certificate available for the kubelet" Feb 24 10:25:59 crc kubenswrapper[4755]: I0224 10:25:59.897998 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53706: no serving certificate available for the kubelet" Feb 24 10:26:01 crc kubenswrapper[4755]: I0224 10:26:01.656157 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53716: no serving certificate available for the kubelet" Feb 24 10:26:02 crc kubenswrapper[4755]: I0224 10:26:02.955413 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53730: no serving certificate available for the kubelet" Feb 24 10:26:04 crc kubenswrapper[4755]: I0224 10:26:04.694273 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46236: no serving certificate available for the kubelet" Feb 24 10:26:06 crc kubenswrapper[4755]: I0224 10:26:06.012924 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46240: no serving certificate available for the kubelet" Feb 24 10:26:07 crc kubenswrapper[4755]: I0224 10:26:07.751928 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46244: no serving certificate available for the kubelet" 
Feb 24 10:26:09 crc kubenswrapper[4755]: I0224 10:26:09.056239 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46258: no serving certificate available for the kubelet" Feb 24 10:26:10 crc kubenswrapper[4755]: I0224 10:26:10.317366 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" Feb 24 10:26:10 crc kubenswrapper[4755]: E0224 10:26:10.317783 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:26:10 crc kubenswrapper[4755]: I0224 10:26:10.786011 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46260: no serving certificate available for the kubelet" Feb 24 10:26:12 crc kubenswrapper[4755]: I0224 10:26:12.115858 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46276: no serving certificate available for the kubelet" Feb 24 10:26:13 crc kubenswrapper[4755]: I0224 10:26:13.838550 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46814: no serving certificate available for the kubelet" Feb 24 10:26:15 crc kubenswrapper[4755]: I0224 10:26:15.165012 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46830: no serving certificate available for the kubelet" Feb 24 10:26:16 crc kubenswrapper[4755]: I0224 10:26:16.881203 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46844: no serving certificate available for the kubelet" Feb 24 10:26:18 crc kubenswrapper[4755]: I0224 10:26:18.209495 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46846: no serving certificate available for the kubelet" Feb 24 10:26:19 crc kubenswrapper[4755]: I0224 10:26:19.274742 4755 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" containerName="galera" probeResult="failure" output=< Feb 24 10:26:19 crc kubenswrapper[4755]: waiting for gcomm URI Feb 24 10:26:19 crc kubenswrapper[4755]: > Feb 24 10:26:19 crc kubenswrapper[4755]: I0224 10:26:19.275358 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 10:26:19 crc kubenswrapper[4755]: I0224 10:26:19.276545 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"bb6268125e049ec7a775938b07551bdc506873d3b7cecf970cc72c40925a0d95"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed startup probe, will be restarted" Feb 24 10:26:19 crc kubenswrapper[4755]: I0224 10:26:19.375120 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" containerName="galera" containerID="cri-o://bb6268125e049ec7a775938b07551bdc506873d3b7cecf970cc72c40925a0d95" gracePeriod=30 Feb 24 10:26:19 crc kubenswrapper[4755]: I0224 10:26:19.923844 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46848: no serving certificate available for the kubelet" Feb 24 10:26:19 crc kubenswrapper[4755]: I0224 10:26:19.954559 4755 generic.go:334] "Generic (PLEG): container finished" podID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" containerID="bb6268125e049ec7a775938b07551bdc506873d3b7cecf970cc72c40925a0d95" exitCode=143 Feb 24 10:26:19 crc kubenswrapper[4755]: I0224 10:26:19.954607 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe13802e-a28d-4e11-a315-c0ae66bf0e1c","Type":"ContainerDied","Data":"bb6268125e049ec7a775938b07551bdc506873d3b7cecf970cc72c40925a0d95"} Feb 24 10:26:19 crc kubenswrapper[4755]: I0224 10:26:19.954637 4755 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe13802e-a28d-4e11-a315-c0ae66bf0e1c","Type":"ContainerStarted","Data":"1dcc6a0e5a48ba2150e9af6809b5a45f82baa846e848c62c60145a8600dce932"} Feb 24 10:26:19 crc kubenswrapper[4755]: I0224 10:26:19.954657 4755 scope.go:117] "RemoveContainer" containerID="c23b00c172415a5d4404660869d9355bcca364f8424ce1b7961a9b52aedf43de" Feb 24 10:26:21 crc kubenswrapper[4755]: I0224 10:26:21.292740 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46858: no serving certificate available for the kubelet" Feb 24 10:26:22 crc kubenswrapper[4755]: I0224 10:26:22.318813 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" Feb 24 10:26:22 crc kubenswrapper[4755]: E0224 10:26:22.319024 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:26:22 crc kubenswrapper[4755]: I0224 10:26:22.764050 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39" containerName="galera" probeResult="failure" output=< Feb 24 10:26:22 crc kubenswrapper[4755]: waiting for gcomm URI Feb 24 10:26:22 crc kubenswrapper[4755]: > Feb 24 10:26:22 crc kubenswrapper[4755]: I0224 10:26:22.764180 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 10:26:22 crc kubenswrapper[4755]: I0224 10:26:22.765229 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" 
containerStatusID={"Type":"cri-o","ID":"d9b418c6634bf36064598a0e1cfcb65663cea0ac5c0b551774275cd0ca618921"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed startup probe, will be restarted" Feb 24 10:26:22 crc kubenswrapper[4755]: I0224 10:26:22.845469 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39" containerName="galera" containerID="cri-o://d9b418c6634bf36064598a0e1cfcb65663cea0ac5c0b551774275cd0ca618921" gracePeriod=30 Feb 24 10:26:22 crc kubenswrapper[4755]: I0224 10:26:22.989793 4755 generic.go:334] "Generic (PLEG): container finished" podID="2f320527-691f-48e9-a243-f60bc805da39" containerID="d9b418c6634bf36064598a0e1cfcb65663cea0ac5c0b551774275cd0ca618921" exitCode=143 Feb 24 10:26:22 crc kubenswrapper[4755]: I0224 10:26:22.989851 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2f320527-691f-48e9-a243-f60bc805da39","Type":"ContainerDied","Data":"d9b418c6634bf36064598a0e1cfcb65663cea0ac5c0b551774275cd0ca618921"} Feb 24 10:26:22 crc kubenswrapper[4755]: I0224 10:26:22.989900 4755 scope.go:117] "RemoveContainer" containerID="2e538d3c9647af0ce3e2cda7edd5083e55d8a51afe1081319c126ae17e558431" Feb 24 10:26:22 crc kubenswrapper[4755]: I0224 10:26:22.994014 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46868: no serving certificate available for the kubelet" Feb 24 10:26:24 crc kubenswrapper[4755]: I0224 10:26:24.005655 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2f320527-691f-48e9-a243-f60bc805da39","Type":"ContainerStarted","Data":"1beb9a67cbcfc1185f8eeecd3f6602727d0ca7b2e48e874d65d187bbd5cdc985"} Feb 24 10:26:24 crc kubenswrapper[4755]: I0224 10:26:24.342663 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42634: no serving certificate available for the kubelet" Feb 24 
10:26:26 crc kubenswrapper[4755]: I0224 10:26:26.056346 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42636: no serving certificate available for the kubelet" Feb 24 10:26:27 crc kubenswrapper[4755]: I0224 10:26:27.391910 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42652: no serving certificate available for the kubelet" Feb 24 10:26:29 crc kubenswrapper[4755]: I0224 10:26:29.106538 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42662: no serving certificate available for the kubelet" Feb 24 10:26:29 crc kubenswrapper[4755]: I0224 10:26:29.887518 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 10:26:29 crc kubenswrapper[4755]: I0224 10:26:29.887582 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 24 10:26:30 crc kubenswrapper[4755]: I0224 10:26:30.448127 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42666: no serving certificate available for the kubelet" Feb 24 10:26:31 crc kubenswrapper[4755]: I0224 10:26:31.289285 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 24 10:26:31 crc kubenswrapper[4755]: I0224 10:26:31.289602 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 10:26:32 crc kubenswrapper[4755]: I0224 10:26:32.165614 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42680: no serving certificate available for the kubelet" Feb 24 10:26:33 crc kubenswrapper[4755]: I0224 10:26:33.317274 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" Feb 24 10:26:33 crc kubenswrapper[4755]: E0224 10:26:33.318211 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:26:33 crc kubenswrapper[4755]: I0224 10:26:33.509990 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42684: no serving certificate available for the kubelet" Feb 24 10:26:35 crc kubenswrapper[4755]: I0224 10:26:35.784005 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47912: no serving certificate available for the kubelet" Feb 24 10:26:36 crc kubenswrapper[4755]: I0224 10:26:36.569514 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47926: no serving certificate available for the kubelet" Feb 24 10:26:38 crc kubenswrapper[4755]: I0224 10:26:38.825321 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47938: no serving certificate available for the kubelet" Feb 24 10:26:39 crc kubenswrapper[4755]: I0224 10:26:39.616946 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47954: no serving certificate available for the kubelet" Feb 24 10:26:41 crc kubenswrapper[4755]: I0224 10:26:41.880270 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47962: no serving certificate available for the kubelet" Feb 24 10:26:41 crc kubenswrapper[4755]: I0224 10:26:41.967741 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5fqdv"] Feb 24 10:26:41 crc kubenswrapper[4755]: E0224 10:26:41.968226 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56590428-885d-4f1e-a9d7-d4066eb72787" containerName="registry-server" Feb 24 10:26:41 crc kubenswrapper[4755]: I0224 10:26:41.968249 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="56590428-885d-4f1e-a9d7-d4066eb72787" containerName="registry-server" Feb 24 10:26:41 crc kubenswrapper[4755]: E0224 10:26:41.968268 4755 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="56590428-885d-4f1e-a9d7-d4066eb72787" containerName="extract-content" Feb 24 10:26:41 crc kubenswrapper[4755]: I0224 10:26:41.968278 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="56590428-885d-4f1e-a9d7-d4066eb72787" containerName="extract-content" Feb 24 10:26:41 crc kubenswrapper[4755]: E0224 10:26:41.968309 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56590428-885d-4f1e-a9d7-d4066eb72787" containerName="extract-utilities" Feb 24 10:26:41 crc kubenswrapper[4755]: I0224 10:26:41.968320 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="56590428-885d-4f1e-a9d7-d4066eb72787" containerName="extract-utilities" Feb 24 10:26:41 crc kubenswrapper[4755]: I0224 10:26:41.968568 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="56590428-885d-4f1e-a9d7-d4066eb72787" containerName="registry-server" Feb 24 10:26:41 crc kubenswrapper[4755]: I0224 10:26:41.970221 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5fqdv" Feb 24 10:26:41 crc kubenswrapper[4755]: I0224 10:26:41.998322 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5fqdv"] Feb 24 10:26:42 crc kubenswrapper[4755]: I0224 10:26:42.102210 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8d4e0f9-ad2b-43df-aa59-e2c91486729f-utilities\") pod \"certified-operators-5fqdv\" (UID: \"b8d4e0f9-ad2b-43df-aa59-e2c91486729f\") " pod="openshift-marketplace/certified-operators-5fqdv" Feb 24 10:26:42 crc kubenswrapper[4755]: I0224 10:26:42.102310 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r48bg\" (UniqueName: \"kubernetes.io/projected/b8d4e0f9-ad2b-43df-aa59-e2c91486729f-kube-api-access-r48bg\") pod \"certified-operators-5fqdv\" (UID: \"b8d4e0f9-ad2b-43df-aa59-e2c91486729f\") " pod="openshift-marketplace/certified-operators-5fqdv" Feb 24 10:26:42 crc kubenswrapper[4755]: I0224 10:26:42.102446 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8d4e0f9-ad2b-43df-aa59-e2c91486729f-catalog-content\") pod \"certified-operators-5fqdv\" (UID: \"b8d4e0f9-ad2b-43df-aa59-e2c91486729f\") " pod="openshift-marketplace/certified-operators-5fqdv" Feb 24 10:26:42 crc kubenswrapper[4755]: I0224 10:26:42.204488 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8d4e0f9-ad2b-43df-aa59-e2c91486729f-utilities\") pod \"certified-operators-5fqdv\" (UID: \"b8d4e0f9-ad2b-43df-aa59-e2c91486729f\") " pod="openshift-marketplace/certified-operators-5fqdv" Feb 24 10:26:42 crc kubenswrapper[4755]: I0224 10:26:42.204621 4755 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-r48bg\" (UniqueName: \"kubernetes.io/projected/b8d4e0f9-ad2b-43df-aa59-e2c91486729f-kube-api-access-r48bg\") pod \"certified-operators-5fqdv\" (UID: \"b8d4e0f9-ad2b-43df-aa59-e2c91486729f\") " pod="openshift-marketplace/certified-operators-5fqdv" Feb 24 10:26:42 crc kubenswrapper[4755]: I0224 10:26:42.204695 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8d4e0f9-ad2b-43df-aa59-e2c91486729f-catalog-content\") pod \"certified-operators-5fqdv\" (UID: \"b8d4e0f9-ad2b-43df-aa59-e2c91486729f\") " pod="openshift-marketplace/certified-operators-5fqdv" Feb 24 10:26:42 crc kubenswrapper[4755]: I0224 10:26:42.205269 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8d4e0f9-ad2b-43df-aa59-e2c91486729f-catalog-content\") pod \"certified-operators-5fqdv\" (UID: \"b8d4e0f9-ad2b-43df-aa59-e2c91486729f\") " pod="openshift-marketplace/certified-operators-5fqdv" Feb 24 10:26:42 crc kubenswrapper[4755]: I0224 10:26:42.205540 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8d4e0f9-ad2b-43df-aa59-e2c91486729f-utilities\") pod \"certified-operators-5fqdv\" (UID: \"b8d4e0f9-ad2b-43df-aa59-e2c91486729f\") " pod="openshift-marketplace/certified-operators-5fqdv" Feb 24 10:26:42 crc kubenswrapper[4755]: I0224 10:26:42.224943 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r48bg\" (UniqueName: \"kubernetes.io/projected/b8d4e0f9-ad2b-43df-aa59-e2c91486729f-kube-api-access-r48bg\") pod \"certified-operators-5fqdv\" (UID: \"b8d4e0f9-ad2b-43df-aa59-e2c91486729f\") " pod="openshift-marketplace/certified-operators-5fqdv" Feb 24 10:26:42 crc kubenswrapper[4755]: I0224 10:26:42.303805 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5fqdv" Feb 24 10:26:42 crc kubenswrapper[4755]: I0224 10:26:42.675607 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47970: no serving certificate available for the kubelet" Feb 24 10:26:42 crc kubenswrapper[4755]: I0224 10:26:42.811747 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5fqdv"] Feb 24 10:26:42 crc kubenswrapper[4755]: I0224 10:26:42.877221 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fqdv" event={"ID":"b8d4e0f9-ad2b-43df-aa59-e2c91486729f","Type":"ContainerStarted","Data":"c58766f71610e7496639889db9eb2cbee14c98ffb203280127f63da33eb7fe08"} Feb 24 10:26:43 crc kubenswrapper[4755]: I0224 10:26:43.756850 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j5vv9"] Feb 24 10:26:43 crc kubenswrapper[4755]: I0224 10:26:43.759797 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5vv9" Feb 24 10:26:43 crc kubenswrapper[4755]: I0224 10:26:43.802556 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5vv9"] Feb 24 10:26:43 crc kubenswrapper[4755]: I0224 10:26:43.838477 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dd950a6-d0b2-49c7-85a9-f94815ead25d-catalog-content\") pod \"redhat-marketplace-j5vv9\" (UID: \"1dd950a6-d0b2-49c7-85a9-f94815ead25d\") " pod="openshift-marketplace/redhat-marketplace-j5vv9" Feb 24 10:26:43 crc kubenswrapper[4755]: I0224 10:26:43.838677 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dd950a6-d0b2-49c7-85a9-f94815ead25d-utilities\") pod \"redhat-marketplace-j5vv9\" (UID: \"1dd950a6-d0b2-49c7-85a9-f94815ead25d\") " pod="openshift-marketplace/redhat-marketplace-j5vv9" Feb 24 10:26:43 crc kubenswrapper[4755]: I0224 10:26:43.838747 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqjgh\" (UniqueName: \"kubernetes.io/projected/1dd950a6-d0b2-49c7-85a9-f94815ead25d-kube-api-access-zqjgh\") pod \"redhat-marketplace-j5vv9\" (UID: \"1dd950a6-d0b2-49c7-85a9-f94815ead25d\") " pod="openshift-marketplace/redhat-marketplace-j5vv9" Feb 24 10:26:43 crc kubenswrapper[4755]: I0224 10:26:43.886696 4755 generic.go:334] "Generic (PLEG): container finished" podID="b8d4e0f9-ad2b-43df-aa59-e2c91486729f" containerID="6fbd103e0acd0accd95fe356f43cf0adedea225e834eb52170a66f2a9e597ec7" exitCode=0 Feb 24 10:26:43 crc kubenswrapper[4755]: I0224 10:26:43.886755 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fqdv" 
event={"ID":"b8d4e0f9-ad2b-43df-aa59-e2c91486729f","Type":"ContainerDied","Data":"6fbd103e0acd0accd95fe356f43cf0adedea225e834eb52170a66f2a9e597ec7"} Feb 24 10:26:43 crc kubenswrapper[4755]: I0224 10:26:43.939790 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqjgh\" (UniqueName: \"kubernetes.io/projected/1dd950a6-d0b2-49c7-85a9-f94815ead25d-kube-api-access-zqjgh\") pod \"redhat-marketplace-j5vv9\" (UID: \"1dd950a6-d0b2-49c7-85a9-f94815ead25d\") " pod="openshift-marketplace/redhat-marketplace-j5vv9" Feb 24 10:26:43 crc kubenswrapper[4755]: I0224 10:26:43.939873 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dd950a6-d0b2-49c7-85a9-f94815ead25d-catalog-content\") pod \"redhat-marketplace-j5vv9\" (UID: \"1dd950a6-d0b2-49c7-85a9-f94815ead25d\") " pod="openshift-marketplace/redhat-marketplace-j5vv9" Feb 24 10:26:43 crc kubenswrapper[4755]: I0224 10:26:43.939947 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dd950a6-d0b2-49c7-85a9-f94815ead25d-utilities\") pod \"redhat-marketplace-j5vv9\" (UID: \"1dd950a6-d0b2-49c7-85a9-f94815ead25d\") " pod="openshift-marketplace/redhat-marketplace-j5vv9" Feb 24 10:26:43 crc kubenswrapper[4755]: I0224 10:26:43.940749 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dd950a6-d0b2-49c7-85a9-f94815ead25d-utilities\") pod \"redhat-marketplace-j5vv9\" (UID: \"1dd950a6-d0b2-49c7-85a9-f94815ead25d\") " pod="openshift-marketplace/redhat-marketplace-j5vv9" Feb 24 10:26:43 crc kubenswrapper[4755]: I0224 10:26:43.941247 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dd950a6-d0b2-49c7-85a9-f94815ead25d-catalog-content\") pod \"redhat-marketplace-j5vv9\" (UID: 
\"1dd950a6-d0b2-49c7-85a9-f94815ead25d\") " pod="openshift-marketplace/redhat-marketplace-j5vv9" Feb 24 10:26:43 crc kubenswrapper[4755]: I0224 10:26:43.966636 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqjgh\" (UniqueName: \"kubernetes.io/projected/1dd950a6-d0b2-49c7-85a9-f94815ead25d-kube-api-access-zqjgh\") pod \"redhat-marketplace-j5vv9\" (UID: \"1dd950a6-d0b2-49c7-85a9-f94815ead25d\") " pod="openshift-marketplace/redhat-marketplace-j5vv9" Feb 24 10:26:44 crc kubenswrapper[4755]: I0224 10:26:44.089118 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5vv9" Feb 24 10:26:44 crc kubenswrapper[4755]: I0224 10:26:44.316158 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" Feb 24 10:26:44 crc kubenswrapper[4755]: E0224 10:26:44.316478 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:26:44 crc kubenswrapper[4755]: I0224 10:26:44.590020 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5vv9"] Feb 24 10:26:44 crc kubenswrapper[4755]: W0224 10:26:44.591459 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1dd950a6_d0b2_49c7_85a9_f94815ead25d.slice/crio-a3d0131c188774c2138895b9770383c6f58b79eb7e2f2d7b2bd31a4944ba393e WatchSource:0}: Error finding container a3d0131c188774c2138895b9770383c6f58b79eb7e2f2d7b2bd31a4944ba393e: Status 404 returned error can't find the container 
with id a3d0131c188774c2138895b9770383c6f58b79eb7e2f2d7b2bd31a4944ba393e Feb 24 10:26:44 crc kubenswrapper[4755]: I0224 10:26:44.899678 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fqdv" event={"ID":"b8d4e0f9-ad2b-43df-aa59-e2c91486729f","Type":"ContainerStarted","Data":"0a382a9588c44b9b328d88453fc1ff481a87a8b13321eae33bf7b4f44930ef4d"} Feb 24 10:26:44 crc kubenswrapper[4755]: I0224 10:26:44.901864 4755 generic.go:334] "Generic (PLEG): container finished" podID="1dd950a6-d0b2-49c7-85a9-f94815ead25d" containerID="01d5f16fdce1ece426e7d1b719a8046104437a5940a2947db889934817630e17" exitCode=0 Feb 24 10:26:44 crc kubenswrapper[4755]: I0224 10:26:44.901972 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5vv9" event={"ID":"1dd950a6-d0b2-49c7-85a9-f94815ead25d","Type":"ContainerDied","Data":"01d5f16fdce1ece426e7d1b719a8046104437a5940a2947db889934817630e17"} Feb 24 10:26:44 crc kubenswrapper[4755]: I0224 10:26:44.902049 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5vv9" event={"ID":"1dd950a6-d0b2-49c7-85a9-f94815ead25d","Type":"ContainerStarted","Data":"a3d0131c188774c2138895b9770383c6f58b79eb7e2f2d7b2bd31a4944ba393e"} Feb 24 10:26:44 crc kubenswrapper[4755]: I0224 10:26:44.948382 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43296: no serving certificate available for the kubelet" Feb 24 10:26:45 crc kubenswrapper[4755]: I0224 10:26:45.717946 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43300: no serving certificate available for the kubelet" Feb 24 10:26:45 crc kubenswrapper[4755]: I0224 10:26:45.912296 4755 generic.go:334] "Generic (PLEG): container finished" podID="b8d4e0f9-ad2b-43df-aa59-e2c91486729f" containerID="0a382a9588c44b9b328d88453fc1ff481a87a8b13321eae33bf7b4f44930ef4d" exitCode=0 Feb 24 10:26:45 crc kubenswrapper[4755]: I0224 10:26:45.912399 4755 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fqdv" event={"ID":"b8d4e0f9-ad2b-43df-aa59-e2c91486729f","Type":"ContainerDied","Data":"0a382a9588c44b9b328d88453fc1ff481a87a8b13321eae33bf7b4f44930ef4d"} Feb 24 10:26:45 crc kubenswrapper[4755]: I0224 10:26:45.917642 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5vv9" event={"ID":"1dd950a6-d0b2-49c7-85a9-f94815ead25d","Type":"ContainerStarted","Data":"60c213598563147a5ccb4ef86178158333eef822eec9199e15730b5eccfea9ea"} Feb 24 10:26:46 crc kubenswrapper[4755]: I0224 10:26:46.929320 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fqdv" event={"ID":"b8d4e0f9-ad2b-43df-aa59-e2c91486729f","Type":"ContainerStarted","Data":"dc3087fdbca8573e43a0a7df6433c5c43b2044367723320bd61f266da722a033"} Feb 24 10:26:46 crc kubenswrapper[4755]: I0224 10:26:46.933867 4755 generic.go:334] "Generic (PLEG): container finished" podID="1dd950a6-d0b2-49c7-85a9-f94815ead25d" containerID="60c213598563147a5ccb4ef86178158333eef822eec9199e15730b5eccfea9ea" exitCode=0 Feb 24 10:26:46 crc kubenswrapper[4755]: I0224 10:26:46.934118 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5vv9" event={"ID":"1dd950a6-d0b2-49c7-85a9-f94815ead25d","Type":"ContainerDied","Data":"60c213598563147a5ccb4ef86178158333eef822eec9199e15730b5eccfea9ea"} Feb 24 10:26:46 crc kubenswrapper[4755]: I0224 10:26:46.946921 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5fqdv" podStartSLOduration=3.559436348 podStartE2EDuration="5.946900735s" podCreationTimestamp="2026-02-24 10:26:41 +0000 UTC" firstStartedPulling="2026-02-24 10:26:43.888684129 +0000 UTC m=+1908.345206692" lastFinishedPulling="2026-02-24 10:26:46.276148516 +0000 UTC m=+1910.732671079" observedRunningTime="2026-02-24 10:26:46.945531623 +0000 UTC 
m=+1911.402054176" watchObservedRunningTime="2026-02-24 10:26:46.946900735 +0000 UTC m=+1911.403423298" Feb 24 10:26:47 crc kubenswrapper[4755]: I0224 10:26:47.983636 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5vv9" event={"ID":"1dd950a6-d0b2-49c7-85a9-f94815ead25d","Type":"ContainerStarted","Data":"00bd97f89739ced26ceaa5a3716e62e6bf41a863cdcccd13d9b30d0762a8c8f3"} Feb 24 10:26:47 crc kubenswrapper[4755]: I0224 10:26:47.999384 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43310: no serving certificate available for the kubelet" Feb 24 10:26:48 crc kubenswrapper[4755]: I0224 10:26:48.004689 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j5vv9" podStartSLOduration=2.580108572 podStartE2EDuration="5.004672948s" podCreationTimestamp="2026-02-24 10:26:43 +0000 UTC" firstStartedPulling="2026-02-24 10:26:44.903558213 +0000 UTC m=+1909.360080796" lastFinishedPulling="2026-02-24 10:26:47.328122629 +0000 UTC m=+1911.784645172" observedRunningTime="2026-02-24 10:26:48.002492251 +0000 UTC m=+1912.459014814" watchObservedRunningTime="2026-02-24 10:26:48.004672948 +0000 UTC m=+1912.461195491" Feb 24 10:26:48 crc kubenswrapper[4755]: I0224 10:26:48.777956 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43318: no serving certificate available for the kubelet" Feb 24 10:26:51 crc kubenswrapper[4755]: I0224 10:26:51.103973 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43330: no serving certificate available for the kubelet" Feb 24 10:26:51 crc kubenswrapper[4755]: I0224 10:26:51.842944 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43346: no serving certificate available for the kubelet" Feb 24 10:26:52 crc kubenswrapper[4755]: I0224 10:26:52.304377 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5fqdv" Feb 24 10:26:52 crc 
kubenswrapper[4755]: I0224 10:26:52.305519 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5fqdv" Feb 24 10:26:52 crc kubenswrapper[4755]: I0224 10:26:52.374600 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5fqdv" Feb 24 10:26:53 crc kubenswrapper[4755]: I0224 10:26:53.110974 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5fqdv" Feb 24 10:26:53 crc kubenswrapper[4755]: I0224 10:26:53.174657 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5fqdv"] Feb 24 10:26:54 crc kubenswrapper[4755]: I0224 10:26:54.090047 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j5vv9" Feb 24 10:26:54 crc kubenswrapper[4755]: I0224 10:26:54.090198 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j5vv9" Feb 24 10:26:54 crc kubenswrapper[4755]: I0224 10:26:54.159932 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48544: no serving certificate available for the kubelet" Feb 24 10:26:54 crc kubenswrapper[4755]: I0224 10:26:54.162643 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j5vv9" Feb 24 10:26:54 crc kubenswrapper[4755]: I0224 10:26:54.901241 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48548: no serving certificate available for the kubelet" Feb 24 10:26:55 crc kubenswrapper[4755]: I0224 10:26:55.048394 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5fqdv" podUID="b8d4e0f9-ad2b-43df-aa59-e2c91486729f" containerName="registry-server" 
containerID="cri-o://dc3087fdbca8573e43a0a7df6433c5c43b2044367723320bd61f266da722a033" gracePeriod=2 Feb 24 10:26:55 crc kubenswrapper[4755]: I0224 10:26:55.116211 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j5vv9" Feb 24 10:26:56 crc kubenswrapper[4755]: I0224 10:26:56.035008 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5vv9"] Feb 24 10:26:56 crc kubenswrapper[4755]: I0224 10:26:56.060241 4755 generic.go:334] "Generic (PLEG): container finished" podID="b8d4e0f9-ad2b-43df-aa59-e2c91486729f" containerID="dc3087fdbca8573e43a0a7df6433c5c43b2044367723320bd61f266da722a033" exitCode=0 Feb 24 10:26:56 crc kubenswrapper[4755]: I0224 10:26:56.060308 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fqdv" event={"ID":"b8d4e0f9-ad2b-43df-aa59-e2c91486729f","Type":"ContainerDied","Data":"dc3087fdbca8573e43a0a7df6433c5c43b2044367723320bd61f266da722a033"} Feb 24 10:26:56 crc kubenswrapper[4755]: I0224 10:26:56.060363 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fqdv" event={"ID":"b8d4e0f9-ad2b-43df-aa59-e2c91486729f","Type":"ContainerDied","Data":"c58766f71610e7496639889db9eb2cbee14c98ffb203280127f63da33eb7fe08"} Feb 24 10:26:56 crc kubenswrapper[4755]: I0224 10:26:56.060386 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c58766f71610e7496639889db9eb2cbee14c98ffb203280127f63da33eb7fe08" Feb 24 10:26:56 crc kubenswrapper[4755]: I0224 10:26:56.091603 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5fqdv" Feb 24 10:26:56 crc kubenswrapper[4755]: I0224 10:26:56.155418 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r48bg\" (UniqueName: \"kubernetes.io/projected/b8d4e0f9-ad2b-43df-aa59-e2c91486729f-kube-api-access-r48bg\") pod \"b8d4e0f9-ad2b-43df-aa59-e2c91486729f\" (UID: \"b8d4e0f9-ad2b-43df-aa59-e2c91486729f\") " Feb 24 10:26:56 crc kubenswrapper[4755]: I0224 10:26:56.155540 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8d4e0f9-ad2b-43df-aa59-e2c91486729f-catalog-content\") pod \"b8d4e0f9-ad2b-43df-aa59-e2c91486729f\" (UID: \"b8d4e0f9-ad2b-43df-aa59-e2c91486729f\") " Feb 24 10:26:56 crc kubenswrapper[4755]: I0224 10:26:56.155611 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8d4e0f9-ad2b-43df-aa59-e2c91486729f-utilities\") pod \"b8d4e0f9-ad2b-43df-aa59-e2c91486729f\" (UID: \"b8d4e0f9-ad2b-43df-aa59-e2c91486729f\") " Feb 24 10:26:56 crc kubenswrapper[4755]: I0224 10:26:56.160060 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8d4e0f9-ad2b-43df-aa59-e2c91486729f-utilities" (OuterVolumeSpecName: "utilities") pod "b8d4e0f9-ad2b-43df-aa59-e2c91486729f" (UID: "b8d4e0f9-ad2b-43df-aa59-e2c91486729f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:26:56 crc kubenswrapper[4755]: I0224 10:26:56.165372 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8d4e0f9-ad2b-43df-aa59-e2c91486729f-kube-api-access-r48bg" (OuterVolumeSpecName: "kube-api-access-r48bg") pod "b8d4e0f9-ad2b-43df-aa59-e2c91486729f" (UID: "b8d4e0f9-ad2b-43df-aa59-e2c91486729f"). InnerVolumeSpecName "kube-api-access-r48bg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:26:56 crc kubenswrapper[4755]: I0224 10:26:56.226896 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8d4e0f9-ad2b-43df-aa59-e2c91486729f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8d4e0f9-ad2b-43df-aa59-e2c91486729f" (UID: "b8d4e0f9-ad2b-43df-aa59-e2c91486729f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:26:56 crc kubenswrapper[4755]: I0224 10:26:56.257184 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8d4e0f9-ad2b-43df-aa59-e2c91486729f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 10:26:56 crc kubenswrapper[4755]: I0224 10:26:56.257237 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8d4e0f9-ad2b-43df-aa59-e2c91486729f-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 10:26:56 crc kubenswrapper[4755]: I0224 10:26:56.257260 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r48bg\" (UniqueName: \"kubernetes.io/projected/b8d4e0f9-ad2b-43df-aa59-e2c91486729f-kube-api-access-r48bg\") on node \"crc\" DevicePath \"\"" Feb 24 10:26:56 crc kubenswrapper[4755]: I0224 10:26:56.322081 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" Feb 24 10:26:56 crc kubenswrapper[4755]: E0224 10:26:56.322327 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:26:57 
crc kubenswrapper[4755]: I0224 10:26:57.070663 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j5vv9" podUID="1dd950a6-d0b2-49c7-85a9-f94815ead25d" containerName="registry-server" containerID="cri-o://00bd97f89739ced26ceaa5a3716e62e6bf41a863cdcccd13d9b30d0762a8c8f3" gracePeriod=2 Feb 24 10:26:57 crc kubenswrapper[4755]: I0224 10:26:57.071294 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5fqdv" Feb 24 10:26:57 crc kubenswrapper[4755]: I0224 10:26:57.109627 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5fqdv"] Feb 24 10:26:57 crc kubenswrapper[4755]: I0224 10:26:57.123480 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5fqdv"] Feb 24 10:26:57 crc kubenswrapper[4755]: I0224 10:26:57.224770 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48564: no serving certificate available for the kubelet" Feb 24 10:26:57 crc kubenswrapper[4755]: I0224 10:26:57.506030 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5vv9" Feb 24 10:26:57 crc kubenswrapper[4755]: I0224 10:26:57.584797 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqjgh\" (UniqueName: \"kubernetes.io/projected/1dd950a6-d0b2-49c7-85a9-f94815ead25d-kube-api-access-zqjgh\") pod \"1dd950a6-d0b2-49c7-85a9-f94815ead25d\" (UID: \"1dd950a6-d0b2-49c7-85a9-f94815ead25d\") " Feb 24 10:26:57 crc kubenswrapper[4755]: I0224 10:26:57.585298 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dd950a6-d0b2-49c7-85a9-f94815ead25d-catalog-content\") pod \"1dd950a6-d0b2-49c7-85a9-f94815ead25d\" (UID: \"1dd950a6-d0b2-49c7-85a9-f94815ead25d\") " Feb 24 10:26:57 crc kubenswrapper[4755]: I0224 10:26:57.585608 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dd950a6-d0b2-49c7-85a9-f94815ead25d-utilities\") pod \"1dd950a6-d0b2-49c7-85a9-f94815ead25d\" (UID: \"1dd950a6-d0b2-49c7-85a9-f94815ead25d\") " Feb 24 10:26:57 crc kubenswrapper[4755]: I0224 10:26:57.586691 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dd950a6-d0b2-49c7-85a9-f94815ead25d-utilities" (OuterVolumeSpecName: "utilities") pod "1dd950a6-d0b2-49c7-85a9-f94815ead25d" (UID: "1dd950a6-d0b2-49c7-85a9-f94815ead25d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:26:57 crc kubenswrapper[4755]: I0224 10:26:57.591769 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dd950a6-d0b2-49c7-85a9-f94815ead25d-kube-api-access-zqjgh" (OuterVolumeSpecName: "kube-api-access-zqjgh") pod "1dd950a6-d0b2-49c7-85a9-f94815ead25d" (UID: "1dd950a6-d0b2-49c7-85a9-f94815ead25d"). InnerVolumeSpecName "kube-api-access-zqjgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:26:57 crc kubenswrapper[4755]: I0224 10:26:57.606548 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dd950a6-d0b2-49c7-85a9-f94815ead25d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1dd950a6-d0b2-49c7-85a9-f94815ead25d" (UID: "1dd950a6-d0b2-49c7-85a9-f94815ead25d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:26:57 crc kubenswrapper[4755]: I0224 10:26:57.687565 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqjgh\" (UniqueName: \"kubernetes.io/projected/1dd950a6-d0b2-49c7-85a9-f94815ead25d-kube-api-access-zqjgh\") on node \"crc\" DevicePath \"\"" Feb 24 10:26:57 crc kubenswrapper[4755]: I0224 10:26:57.687615 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dd950a6-d0b2-49c7-85a9-f94815ead25d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 10:26:57 crc kubenswrapper[4755]: I0224 10:26:57.687636 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dd950a6-d0b2-49c7-85a9-f94815ead25d-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 10:26:57 crc kubenswrapper[4755]: I0224 10:26:57.946233 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48576: no serving certificate available for the kubelet" Feb 24 10:26:58 crc kubenswrapper[4755]: I0224 10:26:58.081587 4755 generic.go:334] "Generic (PLEG): container finished" podID="1dd950a6-d0b2-49c7-85a9-f94815ead25d" containerID="00bd97f89739ced26ceaa5a3716e62e6bf41a863cdcccd13d9b30d0762a8c8f3" exitCode=0 Feb 24 10:26:58 crc kubenswrapper[4755]: I0224 10:26:58.081635 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5vv9" Feb 24 10:26:58 crc kubenswrapper[4755]: I0224 10:26:58.081643 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5vv9" event={"ID":"1dd950a6-d0b2-49c7-85a9-f94815ead25d","Type":"ContainerDied","Data":"00bd97f89739ced26ceaa5a3716e62e6bf41a863cdcccd13d9b30d0762a8c8f3"} Feb 24 10:26:58 crc kubenswrapper[4755]: I0224 10:26:58.081672 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5vv9" event={"ID":"1dd950a6-d0b2-49c7-85a9-f94815ead25d","Type":"ContainerDied","Data":"a3d0131c188774c2138895b9770383c6f58b79eb7e2f2d7b2bd31a4944ba393e"} Feb 24 10:26:58 crc kubenswrapper[4755]: I0224 10:26:58.081696 4755 scope.go:117] "RemoveContainer" containerID="00bd97f89739ced26ceaa5a3716e62e6bf41a863cdcccd13d9b30d0762a8c8f3" Feb 24 10:26:58 crc kubenswrapper[4755]: I0224 10:26:58.116257 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5vv9"] Feb 24 10:26:58 crc kubenswrapper[4755]: I0224 10:26:58.120144 4755 scope.go:117] "RemoveContainer" containerID="60c213598563147a5ccb4ef86178158333eef822eec9199e15730b5eccfea9ea" Feb 24 10:26:58 crc kubenswrapper[4755]: I0224 10:26:58.125113 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5vv9"] Feb 24 10:26:58 crc kubenswrapper[4755]: I0224 10:26:58.144724 4755 scope.go:117] "RemoveContainer" containerID="01d5f16fdce1ece426e7d1b719a8046104437a5940a2947db889934817630e17" Feb 24 10:26:58 crc kubenswrapper[4755]: I0224 10:26:58.187423 4755 scope.go:117] "RemoveContainer" containerID="00bd97f89739ced26ceaa5a3716e62e6bf41a863cdcccd13d9b30d0762a8c8f3" Feb 24 10:26:58 crc kubenswrapper[4755]: E0224 10:26:58.187892 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"00bd97f89739ced26ceaa5a3716e62e6bf41a863cdcccd13d9b30d0762a8c8f3\": container with ID starting with 00bd97f89739ced26ceaa5a3716e62e6bf41a863cdcccd13d9b30d0762a8c8f3 not found: ID does not exist" containerID="00bd97f89739ced26ceaa5a3716e62e6bf41a863cdcccd13d9b30d0762a8c8f3" Feb 24 10:26:58 crc kubenswrapper[4755]: I0224 10:26:58.187945 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00bd97f89739ced26ceaa5a3716e62e6bf41a863cdcccd13d9b30d0762a8c8f3"} err="failed to get container status \"00bd97f89739ced26ceaa5a3716e62e6bf41a863cdcccd13d9b30d0762a8c8f3\": rpc error: code = NotFound desc = could not find container \"00bd97f89739ced26ceaa5a3716e62e6bf41a863cdcccd13d9b30d0762a8c8f3\": container with ID starting with 00bd97f89739ced26ceaa5a3716e62e6bf41a863cdcccd13d9b30d0762a8c8f3 not found: ID does not exist" Feb 24 10:26:58 crc kubenswrapper[4755]: I0224 10:26:58.187983 4755 scope.go:117] "RemoveContainer" containerID="60c213598563147a5ccb4ef86178158333eef822eec9199e15730b5eccfea9ea" Feb 24 10:26:58 crc kubenswrapper[4755]: E0224 10:26:58.188455 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60c213598563147a5ccb4ef86178158333eef822eec9199e15730b5eccfea9ea\": container with ID starting with 60c213598563147a5ccb4ef86178158333eef822eec9199e15730b5eccfea9ea not found: ID does not exist" containerID="60c213598563147a5ccb4ef86178158333eef822eec9199e15730b5eccfea9ea" Feb 24 10:26:58 crc kubenswrapper[4755]: I0224 10:26:58.188671 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60c213598563147a5ccb4ef86178158333eef822eec9199e15730b5eccfea9ea"} err="failed to get container status \"60c213598563147a5ccb4ef86178158333eef822eec9199e15730b5eccfea9ea\": rpc error: code = NotFound desc = could not find container \"60c213598563147a5ccb4ef86178158333eef822eec9199e15730b5eccfea9ea\": container with ID 
starting with 60c213598563147a5ccb4ef86178158333eef822eec9199e15730b5eccfea9ea not found: ID does not exist" Feb 24 10:26:58 crc kubenswrapper[4755]: I0224 10:26:58.188799 4755 scope.go:117] "RemoveContainer" containerID="01d5f16fdce1ece426e7d1b719a8046104437a5940a2947db889934817630e17" Feb 24 10:26:58 crc kubenswrapper[4755]: E0224 10:26:58.189298 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01d5f16fdce1ece426e7d1b719a8046104437a5940a2947db889934817630e17\": container with ID starting with 01d5f16fdce1ece426e7d1b719a8046104437a5940a2947db889934817630e17 not found: ID does not exist" containerID="01d5f16fdce1ece426e7d1b719a8046104437a5940a2947db889934817630e17" Feb 24 10:26:58 crc kubenswrapper[4755]: I0224 10:26:58.189340 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d5f16fdce1ece426e7d1b719a8046104437a5940a2947db889934817630e17"} err="failed to get container status \"01d5f16fdce1ece426e7d1b719a8046104437a5940a2947db889934817630e17\": rpc error: code = NotFound desc = could not find container \"01d5f16fdce1ece426e7d1b719a8046104437a5940a2947db889934817630e17\": container with ID starting with 01d5f16fdce1ece426e7d1b719a8046104437a5940a2947db889934817630e17 not found: ID does not exist" Feb 24 10:26:58 crc kubenswrapper[4755]: I0224 10:26:58.338559 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dd950a6-d0b2-49c7-85a9-f94815ead25d" path="/var/lib/kubelet/pods/1dd950a6-d0b2-49c7-85a9-f94815ead25d/volumes" Feb 24 10:26:58 crc kubenswrapper[4755]: I0224 10:26:58.340629 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8d4e0f9-ad2b-43df-aa59-e2c91486729f" path="/var/lib/kubelet/pods/b8d4e0f9-ad2b-43df-aa59-e2c91486729f/volumes" Feb 24 10:27:00 crc kubenswrapper[4755]: I0224 10:27:00.281163 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48590: no serving certificate 
available for the kubelet" Feb 24 10:27:01 crc kubenswrapper[4755]: I0224 10:27:01.004143 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48604: no serving certificate available for the kubelet" Feb 24 10:27:03 crc kubenswrapper[4755]: I0224 10:27:03.334572 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48608: no serving certificate available for the kubelet" Feb 24 10:27:04 crc kubenswrapper[4755]: I0224 10:27:04.049323 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53930: no serving certificate available for the kubelet" Feb 24 10:27:06 crc kubenswrapper[4755]: I0224 10:27:06.390363 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53932: no serving certificate available for the kubelet" Feb 24 10:27:07 crc kubenswrapper[4755]: I0224 10:27:07.114750 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53944: no serving certificate available for the kubelet" Feb 24 10:27:07 crc kubenswrapper[4755]: I0224 10:27:07.316549 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" Feb 24 10:27:07 crc kubenswrapper[4755]: E0224 10:27:07.316804 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:27:09 crc kubenswrapper[4755]: I0224 10:27:09.439993 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53958: no serving certificate available for the kubelet" Feb 24 10:27:10 crc kubenswrapper[4755]: I0224 10:27:10.177437 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53962: no serving certificate available for the kubelet" Feb 24 10:27:12 crc kubenswrapper[4755]: I0224 
10:27:12.500654 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53974: no serving certificate available for the kubelet" Feb 24 10:27:13 crc kubenswrapper[4755]: I0224 10:27:13.240445 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53978: no serving certificate available for the kubelet" Feb 24 10:27:15 crc kubenswrapper[4755]: I0224 10:27:15.566718 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45484: no serving certificate available for the kubelet" Feb 24 10:27:16 crc kubenswrapper[4755]: I0224 10:27:16.307443 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45494: no serving certificate available for the kubelet" Feb 24 10:27:18 crc kubenswrapper[4755]: I0224 10:27:18.615519 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45496: no serving certificate available for the kubelet" Feb 24 10:27:19 crc kubenswrapper[4755]: I0224 10:27:19.316245 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" Feb 24 10:27:19 crc kubenswrapper[4755]: E0224 10:27:19.316541 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:27:19 crc kubenswrapper[4755]: I0224 10:27:19.361296 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45504: no serving certificate available for the kubelet" Feb 24 10:27:21 crc kubenswrapper[4755]: I0224 10:27:21.674418 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45506: no serving certificate available for the kubelet" Feb 24 10:27:22 crc kubenswrapper[4755]: I0224 10:27:22.412515 4755 ???:1] "http: TLS handshake error from 
192.168.126.11:45518: no serving certificate available for the kubelet" Feb 24 10:27:24 crc kubenswrapper[4755]: I0224 10:27:24.730775 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55702: no serving certificate available for the kubelet" Feb 24 10:27:25 crc kubenswrapper[4755]: I0224 10:27:25.467146 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55716: no serving certificate available for the kubelet" Feb 24 10:27:27 crc kubenswrapper[4755]: I0224 10:27:27.794662 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55722: no serving certificate available for the kubelet" Feb 24 10:27:28 crc kubenswrapper[4755]: I0224 10:27:28.526195 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55724: no serving certificate available for the kubelet" Feb 24 10:27:30 crc kubenswrapper[4755]: I0224 10:27:30.317264 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" Feb 24 10:27:30 crc kubenswrapper[4755]: E0224 10:27:30.318106 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:27:30 crc kubenswrapper[4755]: I0224 10:27:30.851564 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55734: no serving certificate available for the kubelet" Feb 24 10:27:31 crc kubenswrapper[4755]: I0224 10:27:31.599169 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55740: no serving certificate available for the kubelet" Feb 24 10:27:33 crc kubenswrapper[4755]: I0224 10:27:33.906340 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59534: no serving certificate available for the kubelet" 
Feb 24 10:27:34 crc kubenswrapper[4755]: I0224 10:27:34.651538 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59540: no serving certificate available for the kubelet" Feb 24 10:27:36 crc kubenswrapper[4755]: I0224 10:27:36.965181 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59556: no serving certificate available for the kubelet" Feb 24 10:27:37 crc kubenswrapper[4755]: I0224 10:27:37.709171 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59558: no serving certificate available for the kubelet" Feb 24 10:27:40 crc kubenswrapper[4755]: I0224 10:27:40.003723 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59562: no serving certificate available for the kubelet" Feb 24 10:27:40 crc kubenswrapper[4755]: I0224 10:27:40.765467 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59572: no serving certificate available for the kubelet" Feb 24 10:27:43 crc kubenswrapper[4755]: I0224 10:27:43.053856 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59574: no serving certificate available for the kubelet" Feb 24 10:27:43 crc kubenswrapper[4755]: I0224 10:27:43.317292 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" Feb 24 10:27:43 crc kubenswrapper[4755]: E0224 10:27:43.317711 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:27:43 crc kubenswrapper[4755]: I0224 10:27:43.819407 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38960: no serving certificate available for the kubelet" Feb 24 10:27:46 crc kubenswrapper[4755]: I0224 10:27:46.118802 4755 ???:1] 
"http: TLS handshake error from 192.168.126.11:38970: no serving certificate available for the kubelet" Feb 24 10:27:46 crc kubenswrapper[4755]: I0224 10:27:46.871765 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38984: no serving certificate available for the kubelet" Feb 24 10:27:49 crc kubenswrapper[4755]: I0224 10:27:49.154382 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39000: no serving certificate available for the kubelet" Feb 24 10:27:49 crc kubenswrapper[4755]: I0224 10:27:49.915283 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39014: no serving certificate available for the kubelet" Feb 24 10:27:52 crc kubenswrapper[4755]: I0224 10:27:52.198185 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39016: no serving certificate available for the kubelet" Feb 24 10:27:52 crc kubenswrapper[4755]: I0224 10:27:52.971752 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39026: no serving certificate available for the kubelet" Feb 24 10:27:55 crc kubenswrapper[4755]: I0224 10:27:55.258425 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50576: no serving certificate available for the kubelet" Feb 24 10:27:56 crc kubenswrapper[4755]: I0224 10:27:56.027865 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50580: no serving certificate available for the kubelet" Feb 24 10:27:58 crc kubenswrapper[4755]: I0224 10:27:58.311527 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50596: no serving certificate available for the kubelet" Feb 24 10:27:58 crc kubenswrapper[4755]: I0224 10:27:58.316920 4755 scope.go:117] "RemoveContainer" containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" Feb 24 10:27:58 crc kubenswrapper[4755]: I0224 10:27:58.691819 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" 
event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerStarted","Data":"3ce3dda87a828ebb337745ef70df8dd0ea69f2104d8a6f36ce5c4c74f7a0bf28"} Feb 24 10:27:59 crc kubenswrapper[4755]: I0224 10:27:59.069825 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50600: no serving certificate available for the kubelet" Feb 24 10:28:01 crc kubenswrapper[4755]: I0224 10:28:01.370050 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50610: no serving certificate available for the kubelet" Feb 24 10:28:02 crc kubenswrapper[4755]: I0224 10:28:02.119586 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50612: no serving certificate available for the kubelet" Feb 24 10:28:02 crc kubenswrapper[4755]: E0224 10:28:02.516599 4755 certificate_manager.go:579] "Unhandled Error" err="kubernetes.io/kubelet-serving: certificate request was not signed: timed out waiting for the condition" logger="UnhandledError" Feb 24 10:28:04 crc kubenswrapper[4755]: I0224 10:28:04.427681 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43536: no serving certificate available for the kubelet" Feb 24 10:28:05 crc kubenswrapper[4755]: I0224 10:28:05.178364 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43548: no serving certificate available for the kubelet" Feb 24 10:28:06 crc kubenswrapper[4755]: I0224 10:28:06.797636 4755 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 24 10:28:06 crc kubenswrapper[4755]: I0224 10:28:06.810726 4755 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 24 10:28:06 crc kubenswrapper[4755]: I0224 10:28:06.850379 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43554: no serving certificate available for the kubelet" Feb 24 10:28:06 crc kubenswrapper[4755]: I0224 10:28:06.944936 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43566: no serving certificate available for the kubelet" Feb 
24 10:28:07 crc kubenswrapper[4755]: I0224 10:28:07.043107 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43576: no serving certificate available for the kubelet" Feb 24 10:28:07 crc kubenswrapper[4755]: I0224 10:28:07.092598 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43586: no serving certificate available for the kubelet" Feb 24 10:28:07 crc kubenswrapper[4755]: I0224 10:28:07.168299 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43592: no serving certificate available for the kubelet" Feb 24 10:28:07 crc kubenswrapper[4755]: I0224 10:28:07.300245 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43604: no serving certificate available for the kubelet" Feb 24 10:28:07 crc kubenswrapper[4755]: I0224 10:28:07.468956 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43610: no serving certificate available for the kubelet" Feb 24 10:28:07 crc kubenswrapper[4755]: I0224 10:28:07.487215 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43612: no serving certificate available for the kubelet" Feb 24 10:28:07 crc kubenswrapper[4755]: I0224 10:28:07.839367 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43624: no serving certificate available for the kubelet" Feb 24 10:28:08 crc kubenswrapper[4755]: I0224 10:28:08.237897 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43640: no serving certificate available for the kubelet" Feb 24 10:28:08 crc kubenswrapper[4755]: I0224 10:28:08.516546 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43642: no serving certificate available for the kubelet" Feb 24 10:28:09 crc kubenswrapper[4755]: I0224 10:28:09.828149 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43648: no serving certificate available for the kubelet" Feb 24 10:28:10 crc kubenswrapper[4755]: I0224 10:28:10.505057 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43662: no serving certificate available for the kubelet" Feb 24 10:28:11 crc 
kubenswrapper[4755]: I0224 10:28:11.294300 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43674: no serving certificate available for the kubelet" Feb 24 10:28:12 crc kubenswrapper[4755]: I0224 10:28:12.425954 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43678: no serving certificate available for the kubelet" Feb 24 10:28:13 crc kubenswrapper[4755]: I0224 10:28:13.617701 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43694: no serving certificate available for the kubelet" Feb 24 10:28:14 crc kubenswrapper[4755]: I0224 10:28:14.345616 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36762: no serving certificate available for the kubelet" Feb 24 10:28:16 crc kubenswrapper[4755]: I0224 10:28:16.678856 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36778: no serving certificate available for the kubelet" Feb 24 10:28:17 crc kubenswrapper[4755]: I0224 10:28:17.400793 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36790: no serving certificate available for the kubelet" Feb 24 10:28:17 crc kubenswrapper[4755]: I0224 10:28:17.582109 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36794: no serving certificate available for the kubelet" Feb 24 10:28:19 crc kubenswrapper[4755]: I0224 10:28:19.724608 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36800: no serving certificate available for the kubelet" Feb 24 10:28:20 crc kubenswrapper[4755]: I0224 10:28:20.455213 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36808: no serving certificate available for the kubelet" Feb 24 10:28:22 crc kubenswrapper[4755]: I0224 10:28:22.763167 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36820: no serving certificate available for the kubelet" Feb 24 10:28:23 crc kubenswrapper[4755]: I0224 10:28:23.510522 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36826: no serving certificate available for the kubelet" Feb 24 10:28:25 crc kubenswrapper[4755]: I0224 
10:28:25.827033 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46026: no serving certificate available for the kubelet" Feb 24 10:28:26 crc kubenswrapper[4755]: I0224 10:28:26.565783 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46032: no serving certificate available for the kubelet" Feb 24 10:28:27 crc kubenswrapper[4755]: I0224 10:28:27.862045 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46040: no serving certificate available for the kubelet" Feb 24 10:28:28 crc kubenswrapper[4755]: I0224 10:28:28.891186 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46052: no serving certificate available for the kubelet" Feb 24 10:28:29 crc kubenswrapper[4755]: I0224 10:28:29.609303 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46058: no serving certificate available for the kubelet" Feb 24 10:28:31 crc kubenswrapper[4755]: I0224 10:28:31.956244 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46064: no serving certificate available for the kubelet" Feb 24 10:28:32 crc kubenswrapper[4755]: I0224 10:28:32.658322 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46070: no serving certificate available for the kubelet" Feb 24 10:28:35 crc kubenswrapper[4755]: I0224 10:28:35.006288 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41050: no serving certificate available for the kubelet" Feb 24 10:28:35 crc kubenswrapper[4755]: I0224 10:28:35.716160 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41056: no serving certificate available for the kubelet" Feb 24 10:28:38 crc kubenswrapper[4755]: I0224 10:28:38.068333 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41070: no serving certificate available for the kubelet" Feb 24 10:28:38 crc kubenswrapper[4755]: I0224 10:28:38.773640 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41078: no serving certificate available for the kubelet" Feb 24 10:28:41 crc kubenswrapper[4755]: I0224 10:28:41.194010 4755 ???:1] 
"http: TLS handshake error from 192.168.126.11:41092: no serving certificate available for the kubelet" Feb 24 10:28:41 crc kubenswrapper[4755]: I0224 10:28:41.828249 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41100: no serving certificate available for the kubelet" Feb 24 10:28:44 crc kubenswrapper[4755]: I0224 10:28:44.242244 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54478: no serving certificate available for the kubelet" Feb 24 10:28:44 crc kubenswrapper[4755]: I0224 10:28:44.899931 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54484: no serving certificate available for the kubelet" Feb 24 10:28:47 crc kubenswrapper[4755]: I0224 10:28:47.308607 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54488: no serving certificate available for the kubelet" Feb 24 10:28:47 crc kubenswrapper[4755]: I0224 10:28:47.953728 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54494: no serving certificate available for the kubelet" Feb 24 10:28:48 crc kubenswrapper[4755]: I0224 10:28:48.386013 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54498: no serving certificate available for the kubelet" Feb 24 10:28:50 crc kubenswrapper[4755]: I0224 10:28:50.369467 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54510: no serving certificate available for the kubelet" Feb 24 10:28:51 crc kubenswrapper[4755]: I0224 10:28:51.009933 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54522: no serving certificate available for the kubelet" Feb 24 10:28:53 crc kubenswrapper[4755]: I0224 10:28:53.429638 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54536: no serving certificate available for the kubelet" Feb 24 10:28:54 crc kubenswrapper[4755]: I0224 10:28:54.073351 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52172: no serving certificate available for the kubelet" Feb 24 10:28:56 crc kubenswrapper[4755]: I0224 10:28:56.486966 4755 ???:1] "http: TLS handshake error 
from 192.168.126.11:52182: no serving certificate available for the kubelet" Feb 24 10:28:57 crc kubenswrapper[4755]: I0224 10:28:57.137732 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52196: no serving certificate available for the kubelet" Feb 24 10:28:59 crc kubenswrapper[4755]: I0224 10:28:59.548628 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52210: no serving certificate available for the kubelet" Feb 24 10:29:00 crc kubenswrapper[4755]: I0224 10:29:00.201911 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52214: no serving certificate available for the kubelet" Feb 24 10:29:02 crc kubenswrapper[4755]: I0224 10:29:02.592024 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52230: no serving certificate available for the kubelet" Feb 24 10:29:03 crc kubenswrapper[4755]: I0224 10:29:03.258049 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52234: no serving certificate available for the kubelet" Feb 24 10:29:05 crc kubenswrapper[4755]: I0224 10:29:05.653551 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37710: no serving certificate available for the kubelet" Feb 24 10:29:06 crc kubenswrapper[4755]: I0224 10:29:06.318884 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37712: no serving certificate available for the kubelet" Feb 24 10:29:08 crc kubenswrapper[4755]: I0224 10:29:08.711354 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37720: no serving certificate available for the kubelet" Feb 24 10:29:09 crc kubenswrapper[4755]: I0224 10:29:09.360297 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37724: no serving certificate available for the kubelet" Feb 24 10:29:11 crc kubenswrapper[4755]: I0224 10:29:11.770385 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37738: no serving certificate available for the kubelet" Feb 24 10:29:12 crc kubenswrapper[4755]: I0224 10:29:12.424446 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37752: no 
serving certificate available for the kubelet" Feb 24 10:29:14 crc kubenswrapper[4755]: I0224 10:29:14.824503 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39520: no serving certificate available for the kubelet" Feb 24 10:29:15 crc kubenswrapper[4755]: I0224 10:29:15.475565 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39534: no serving certificate available for the kubelet" Feb 24 10:29:17 crc kubenswrapper[4755]: I0224 10:29:17.883516 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39544: no serving certificate available for the kubelet" Feb 24 10:29:18 crc kubenswrapper[4755]: I0224 10:29:18.534835 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39548: no serving certificate available for the kubelet" Feb 24 10:29:20 crc kubenswrapper[4755]: I0224 10:29:20.942426 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39562: no serving certificate available for the kubelet" Feb 24 10:29:21 crc kubenswrapper[4755]: I0224 10:29:21.604690 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39574: no serving certificate available for the kubelet" Feb 24 10:29:23 crc kubenswrapper[4755]: I0224 10:29:23.996781 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47538: no serving certificate available for the kubelet" Feb 24 10:29:24 crc kubenswrapper[4755]: I0224 10:29:24.688126 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47554: no serving certificate available for the kubelet" Feb 24 10:29:27 crc kubenswrapper[4755]: I0224 10:29:27.055100 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47566: no serving certificate available for the kubelet" Feb 24 10:29:27 crc kubenswrapper[4755]: I0224 10:29:27.750532 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47570: no serving certificate available for the kubelet" Feb 24 10:29:29 crc kubenswrapper[4755]: I0224 10:29:29.392482 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47586: no serving certificate available 
for the kubelet" Feb 24 10:29:30 crc kubenswrapper[4755]: I0224 10:29:30.123637 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47602: no serving certificate available for the kubelet" Feb 24 10:29:30 crc kubenswrapper[4755]: I0224 10:29:30.806351 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47608: no serving certificate available for the kubelet" Feb 24 10:29:33 crc kubenswrapper[4755]: I0224 10:29:33.182151 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47616: no serving certificate available for the kubelet" Feb 24 10:29:33 crc kubenswrapper[4755]: I0224 10:29:33.867204 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45816: no serving certificate available for the kubelet" Feb 24 10:29:34 crc kubenswrapper[4755]: I0224 10:29:34.812262 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45830: no serving certificate available for the kubelet" Feb 24 10:29:36 crc kubenswrapper[4755]: I0224 10:29:36.233633 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45844: no serving certificate available for the kubelet" Feb 24 10:29:36 crc kubenswrapper[4755]: I0224 10:29:36.489125 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45850: no serving certificate available for the kubelet" Feb 24 10:29:36 crc kubenswrapper[4755]: I0224 10:29:36.926283 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45852: no serving certificate available for the kubelet" Feb 24 10:29:39 crc kubenswrapper[4755]: I0224 10:29:39.289455 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45860: no serving certificate available for the kubelet" Feb 24 10:29:40 crc kubenswrapper[4755]: I0224 10:29:40.027698 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45876: no serving certificate available for the kubelet" Feb 24 10:29:42 crc kubenswrapper[4755]: I0224 10:29:42.346033 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45882: no serving certificate available for the kubelet" Feb 24 
10:29:43 crc kubenswrapper[4755]: I0224 10:29:43.081857 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45896: no serving certificate available for the kubelet" Feb 24 10:29:45 crc kubenswrapper[4755]: I0224 10:29:45.406259 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53898: no serving certificate available for the kubelet" Feb 24 10:29:46 crc kubenswrapper[4755]: I0224 10:29:46.133519 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53900: no serving certificate available for the kubelet" Feb 24 10:29:48 crc kubenswrapper[4755]: I0224 10:29:48.458408 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53904: no serving certificate available for the kubelet" Feb 24 10:29:49 crc kubenswrapper[4755]: I0224 10:29:49.170427 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53916: no serving certificate available for the kubelet" Feb 24 10:29:51 crc kubenswrapper[4755]: I0224 10:29:51.589749 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53918: no serving certificate available for the kubelet" Feb 24 10:29:52 crc kubenswrapper[4755]: I0224 10:29:52.259843 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53920: no serving certificate available for the kubelet" Feb 24 10:29:54 crc kubenswrapper[4755]: I0224 10:29:54.651515 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48650: no serving certificate available for the kubelet" Feb 24 10:29:55 crc kubenswrapper[4755]: I0224 10:29:55.317455 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48660: no serving certificate available for the kubelet" Feb 24 10:29:57 crc kubenswrapper[4755]: I0224 10:29:57.685993 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48670: no serving certificate available for the kubelet" Feb 24 10:29:58 crc kubenswrapper[4755]: I0224 10:29:58.422819 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48680: no serving certificate available for the kubelet" Feb 24 10:30:00 crc 
kubenswrapper[4755]: I0224 10:30:00.167178 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532150-t6lsc"] Feb 24 10:30:00 crc kubenswrapper[4755]: E0224 10:30:00.167943 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8d4e0f9-ad2b-43df-aa59-e2c91486729f" containerName="extract-utilities" Feb 24 10:30:00 crc kubenswrapper[4755]: I0224 10:30:00.167967 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8d4e0f9-ad2b-43df-aa59-e2c91486729f" containerName="extract-utilities" Feb 24 10:30:00 crc kubenswrapper[4755]: E0224 10:30:00.167997 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8d4e0f9-ad2b-43df-aa59-e2c91486729f" containerName="extract-content" Feb 24 10:30:00 crc kubenswrapper[4755]: I0224 10:30:00.168007 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8d4e0f9-ad2b-43df-aa59-e2c91486729f" containerName="extract-content" Feb 24 10:30:00 crc kubenswrapper[4755]: E0224 10:30:00.168031 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd950a6-d0b2-49c7-85a9-f94815ead25d" containerName="extract-content" Feb 24 10:30:00 crc kubenswrapper[4755]: I0224 10:30:00.168044 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd950a6-d0b2-49c7-85a9-f94815ead25d" containerName="extract-content" Feb 24 10:30:00 crc kubenswrapper[4755]: E0224 10:30:00.168088 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd950a6-d0b2-49c7-85a9-f94815ead25d" containerName="registry-server" Feb 24 10:30:00 crc kubenswrapper[4755]: I0224 10:30:00.168100 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd950a6-d0b2-49c7-85a9-f94815ead25d" containerName="registry-server" Feb 24 10:30:00 crc kubenswrapper[4755]: E0224 10:30:00.168127 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd950a6-d0b2-49c7-85a9-f94815ead25d" containerName="extract-utilities" Feb 24 10:30:00 crc 
kubenswrapper[4755]: I0224 10:30:00.168139 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd950a6-d0b2-49c7-85a9-f94815ead25d" containerName="extract-utilities" Feb 24 10:30:00 crc kubenswrapper[4755]: E0224 10:30:00.168163 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8d4e0f9-ad2b-43df-aa59-e2c91486729f" containerName="registry-server" Feb 24 10:30:00 crc kubenswrapper[4755]: I0224 10:30:00.168173 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8d4e0f9-ad2b-43df-aa59-e2c91486729f" containerName="registry-server" Feb 24 10:30:00 crc kubenswrapper[4755]: I0224 10:30:00.168406 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8d4e0f9-ad2b-43df-aa59-e2c91486729f" containerName="registry-server" Feb 24 10:30:00 crc kubenswrapper[4755]: I0224 10:30:00.168433 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dd950a6-d0b2-49c7-85a9-f94815ead25d" containerName="registry-server" Feb 24 10:30:00 crc kubenswrapper[4755]: I0224 10:30:00.169125 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532150-t6lsc" Feb 24 10:30:00 crc kubenswrapper[4755]: I0224 10:30:00.172480 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 24 10:30:00 crc kubenswrapper[4755]: I0224 10:30:00.174517 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 10:30:00 crc kubenswrapper[4755]: I0224 10:30:00.186964 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532150-t6lsc"] Feb 24 10:30:00 crc kubenswrapper[4755]: I0224 10:30:00.260361 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1da4ebf-91f7-4419-879c-692efc476e8d-config-volume\") pod \"collect-profiles-29532150-t6lsc\" (UID: \"f1da4ebf-91f7-4419-879c-692efc476e8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532150-t6lsc" Feb 24 10:30:00 crc kubenswrapper[4755]: I0224 10:30:00.260605 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1da4ebf-91f7-4419-879c-692efc476e8d-secret-volume\") pod \"collect-profiles-29532150-t6lsc\" (UID: \"f1da4ebf-91f7-4419-879c-692efc476e8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532150-t6lsc" Feb 24 10:30:00 crc kubenswrapper[4755]: I0224 10:30:00.260636 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjb6b\" (UniqueName: \"kubernetes.io/projected/f1da4ebf-91f7-4419-879c-692efc476e8d-kube-api-access-wjb6b\") pod \"collect-profiles-29532150-t6lsc\" (UID: \"f1da4ebf-91f7-4419-879c-692efc476e8d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29532150-t6lsc" Feb 24 10:30:00 crc kubenswrapper[4755]: I0224 10:30:00.362560 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1da4ebf-91f7-4419-879c-692efc476e8d-config-volume\") pod \"collect-profiles-29532150-t6lsc\" (UID: \"f1da4ebf-91f7-4419-879c-692efc476e8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532150-t6lsc" Feb 24 10:30:00 crc kubenswrapper[4755]: I0224 10:30:00.362734 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1da4ebf-91f7-4419-879c-692efc476e8d-secret-volume\") pod \"collect-profiles-29532150-t6lsc\" (UID: \"f1da4ebf-91f7-4419-879c-692efc476e8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532150-t6lsc" Feb 24 10:30:00 crc kubenswrapper[4755]: I0224 10:30:00.362768 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjb6b\" (UniqueName: \"kubernetes.io/projected/f1da4ebf-91f7-4419-879c-692efc476e8d-kube-api-access-wjb6b\") pod \"collect-profiles-29532150-t6lsc\" (UID: \"f1da4ebf-91f7-4419-879c-692efc476e8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532150-t6lsc" Feb 24 10:30:00 crc kubenswrapper[4755]: I0224 10:30:00.365803 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1da4ebf-91f7-4419-879c-692efc476e8d-config-volume\") pod \"collect-profiles-29532150-t6lsc\" (UID: \"f1da4ebf-91f7-4419-879c-692efc476e8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532150-t6lsc" Feb 24 10:30:00 crc kubenswrapper[4755]: I0224 10:30:00.375817 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/f1da4ebf-91f7-4419-879c-692efc476e8d-secret-volume\") pod \"collect-profiles-29532150-t6lsc\" (UID: \"f1da4ebf-91f7-4419-879c-692efc476e8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532150-t6lsc" Feb 24 10:30:00 crc kubenswrapper[4755]: I0224 10:30:00.383931 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjb6b\" (UniqueName: \"kubernetes.io/projected/f1da4ebf-91f7-4419-879c-692efc476e8d-kube-api-access-wjb6b\") pod \"collect-profiles-29532150-t6lsc\" (UID: \"f1da4ebf-91f7-4419-879c-692efc476e8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532150-t6lsc" Feb 24 10:30:00 crc kubenswrapper[4755]: I0224 10:30:00.490715 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532150-t6lsc" Feb 24 10:30:01 crc kubenswrapper[4755]: I0224 10:30:00.722548 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48684: no serving certificate available for the kubelet" Feb 24 10:30:01 crc kubenswrapper[4755]: I0224 10:30:01.463127 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48690: no serving certificate available for the kubelet" Feb 24 10:30:01 crc kubenswrapper[4755]: I0224 10:30:01.579580 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532150-t6lsc"] Feb 24 10:30:01 crc kubenswrapper[4755]: I0224 10:30:01.930277 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532150-t6lsc" event={"ID":"f1da4ebf-91f7-4419-879c-692efc476e8d","Type":"ContainerStarted","Data":"1f55fa66fd641ca207510b0518ca4b8a501a1f1ea7a0deb51639eb28aa98ea88"} Feb 24 10:30:01 crc kubenswrapper[4755]: I0224 10:30:01.930331 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532150-t6lsc" 
event={"ID":"f1da4ebf-91f7-4419-879c-692efc476e8d","Type":"ContainerStarted","Data":"53f93f6071ab97b963fab6ce2be815b7a4c85f376efdf8adeffcd58679dd91d8"} Feb 24 10:30:01 crc kubenswrapper[4755]: I0224 10:30:01.951579 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29532150-t6lsc" podStartSLOduration=1.951562911 podStartE2EDuration="1.951562911s" podCreationTimestamp="2026-02-24 10:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 10:30:01.946052142 +0000 UTC m=+2106.402574685" watchObservedRunningTime="2026-02-24 10:30:01.951562911 +0000 UTC m=+2106.408085454" Feb 24 10:30:02 crc kubenswrapper[4755]: I0224 10:30:02.940551 4755 generic.go:334] "Generic (PLEG): container finished" podID="f1da4ebf-91f7-4419-879c-692efc476e8d" containerID="1f55fa66fd641ca207510b0518ca4b8a501a1f1ea7a0deb51639eb28aa98ea88" exitCode=0 Feb 24 10:30:02 crc kubenswrapper[4755]: I0224 10:30:02.940633 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532150-t6lsc" event={"ID":"f1da4ebf-91f7-4419-879c-692efc476e8d","Type":"ContainerDied","Data":"1f55fa66fd641ca207510b0518ca4b8a501a1f1ea7a0deb51639eb28aa98ea88"} Feb 24 10:30:03 crc kubenswrapper[4755]: I0224 10:30:03.786508 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56806: no serving certificate available for the kubelet" Feb 24 10:30:04 crc kubenswrapper[4755]: I0224 10:30:04.350688 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532150-t6lsc" Feb 24 10:30:04 crc kubenswrapper[4755]: I0224 10:30:04.444263 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1da4ebf-91f7-4419-879c-692efc476e8d-config-volume\") pod \"f1da4ebf-91f7-4419-879c-692efc476e8d\" (UID: \"f1da4ebf-91f7-4419-879c-692efc476e8d\") " Feb 24 10:30:04 crc kubenswrapper[4755]: I0224 10:30:04.444353 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjb6b\" (UniqueName: \"kubernetes.io/projected/f1da4ebf-91f7-4419-879c-692efc476e8d-kube-api-access-wjb6b\") pod \"f1da4ebf-91f7-4419-879c-692efc476e8d\" (UID: \"f1da4ebf-91f7-4419-879c-692efc476e8d\") " Feb 24 10:30:04 crc kubenswrapper[4755]: I0224 10:30:04.444407 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1da4ebf-91f7-4419-879c-692efc476e8d-secret-volume\") pod \"f1da4ebf-91f7-4419-879c-692efc476e8d\" (UID: \"f1da4ebf-91f7-4419-879c-692efc476e8d\") " Feb 24 10:30:04 crc kubenswrapper[4755]: I0224 10:30:04.445175 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1da4ebf-91f7-4419-879c-692efc476e8d-config-volume" (OuterVolumeSpecName: "config-volume") pod "f1da4ebf-91f7-4419-879c-692efc476e8d" (UID: "f1da4ebf-91f7-4419-879c-692efc476e8d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:30:04 crc kubenswrapper[4755]: I0224 10:30:04.449805 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1da4ebf-91f7-4419-879c-692efc476e8d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f1da4ebf-91f7-4419-879c-692efc476e8d" (UID: "f1da4ebf-91f7-4419-879c-692efc476e8d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 10:30:04 crc kubenswrapper[4755]: I0224 10:30:04.451385 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1da4ebf-91f7-4419-879c-692efc476e8d-kube-api-access-wjb6b" (OuterVolumeSpecName: "kube-api-access-wjb6b") pod "f1da4ebf-91f7-4419-879c-692efc476e8d" (UID: "f1da4ebf-91f7-4419-879c-692efc476e8d"). InnerVolumeSpecName "kube-api-access-wjb6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:30:04 crc kubenswrapper[4755]: I0224 10:30:04.505154 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56818: no serving certificate available for the kubelet" Feb 24 10:30:04 crc kubenswrapper[4755]: I0224 10:30:04.546639 4755 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1da4ebf-91f7-4419-879c-692efc476e8d-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 10:30:04 crc kubenswrapper[4755]: I0224 10:30:04.546668 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjb6b\" (UniqueName: \"kubernetes.io/projected/f1da4ebf-91f7-4419-879c-692efc476e8d-kube-api-access-wjb6b\") on node \"crc\" DevicePath \"\"" Feb 24 10:30:04 crc kubenswrapper[4755]: I0224 10:30:04.546677 4755 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1da4ebf-91f7-4419-879c-692efc476e8d-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 10:30:04 crc kubenswrapper[4755]: I0224 10:30:04.685635 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532105-sr449"] Feb 24 10:30:04 crc kubenswrapper[4755]: I0224 10:30:04.698932 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532105-sr449"] Feb 24 10:30:04 crc kubenswrapper[4755]: I0224 10:30:04.963248 4755 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532150-t6lsc" event={"ID":"f1da4ebf-91f7-4419-879c-692efc476e8d","Type":"ContainerDied","Data":"53f93f6071ab97b963fab6ce2be815b7a4c85f376efdf8adeffcd58679dd91d8"} Feb 24 10:30:04 crc kubenswrapper[4755]: I0224 10:30:04.963310 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53f93f6071ab97b963fab6ce2be815b7a4c85f376efdf8adeffcd58679dd91d8" Feb 24 10:30:04 crc kubenswrapper[4755]: I0224 10:30:04.963394 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532150-t6lsc" Feb 24 10:30:06 crc kubenswrapper[4755]: I0224 10:30:06.323904 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="279eff18-be0a-4f07-81fc-69fef8faac6c" path="/var/lib/kubelet/pods/279eff18-be0a-4f07-81fc-69fef8faac6c/volumes" Feb 24 10:30:06 crc kubenswrapper[4755]: I0224 10:30:06.850446 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56834: no serving certificate available for the kubelet" Feb 24 10:30:07 crc kubenswrapper[4755]: I0224 10:30:07.550577 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56842: no serving certificate available for the kubelet" Feb 24 10:30:09 crc kubenswrapper[4755]: I0224 10:30:09.898397 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56852: no serving certificate available for the kubelet" Feb 24 10:30:10 crc kubenswrapper[4755]: I0224 10:30:10.585706 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56860: no serving certificate available for the kubelet" Feb 24 10:30:12 crc kubenswrapper[4755]: I0224 10:30:12.945185 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56868: no serving certificate available for the kubelet" Feb 24 10:30:13 crc kubenswrapper[4755]: I0224 10:30:13.644111 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56880: no serving certificate available for the 
kubelet" Feb 24 10:30:16 crc kubenswrapper[4755]: I0224 10:30:16.000573 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58548: no serving certificate available for the kubelet" Feb 24 10:30:16 crc kubenswrapper[4755]: I0224 10:30:16.700566 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58550: no serving certificate available for the kubelet" Feb 24 10:30:19 crc kubenswrapper[4755]: I0224 10:30:19.073938 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58564: no serving certificate available for the kubelet" Feb 24 10:30:19 crc kubenswrapper[4755]: I0224 10:30:19.756818 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58570: no serving certificate available for the kubelet" Feb 24 10:30:21 crc kubenswrapper[4755]: I0224 10:30:21.695631 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:30:21 crc kubenswrapper[4755]: I0224 10:30:21.696169 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 10:30:22 crc kubenswrapper[4755]: I0224 10:30:22.133316 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58582: no serving certificate available for the kubelet" Feb 24 10:30:22 crc kubenswrapper[4755]: I0224 10:30:22.814524 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58594: no serving certificate available for the kubelet" Feb 24 10:30:25 crc kubenswrapper[4755]: I0224 10:30:25.167076 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43998: no serving certificate available for the 
kubelet" Feb 24 10:30:25 crc kubenswrapper[4755]: I0224 10:30:25.869508 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44010: no serving certificate available for the kubelet" Feb 24 10:30:28 crc kubenswrapper[4755]: I0224 10:30:28.228938 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44018: no serving certificate available for the kubelet" Feb 24 10:30:28 crc kubenswrapper[4755]: I0224 10:30:28.912401 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44028: no serving certificate available for the kubelet" Feb 24 10:30:29 crc kubenswrapper[4755]: I0224 10:30:29.590210 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" containerName="galera" probeResult="failure" output=< Feb 24 10:30:29 crc kubenswrapper[4755]: waiting for gcomm URI Feb 24 10:30:29 crc kubenswrapper[4755]: > Feb 24 10:30:29 crc kubenswrapper[4755]: I0224 10:30:29.590344 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 10:30:29 crc kubenswrapper[4755]: I0224 10:30:29.591091 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"1dcc6a0e5a48ba2150e9af6809b5a45f82baa846e848c62c60145a8600dce932"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed startup probe, will be restarted" Feb 24 10:30:29 crc kubenswrapper[4755]: I0224 10:30:29.668935 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" containerName="galera" containerID="cri-o://1dcc6a0e5a48ba2150e9af6809b5a45f82baa846e848c62c60145a8600dce932" gracePeriod=30 Feb 24 10:30:30 crc kubenswrapper[4755]: I0224 10:30:30.189025 4755 generic.go:334] "Generic (PLEG): container finished" podID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" 
containerID="1dcc6a0e5a48ba2150e9af6809b5a45f82baa846e848c62c60145a8600dce932" exitCode=143 Feb 24 10:30:30 crc kubenswrapper[4755]: I0224 10:30:30.189220 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe13802e-a28d-4e11-a315-c0ae66bf0e1c","Type":"ContainerDied","Data":"1dcc6a0e5a48ba2150e9af6809b5a45f82baa846e848c62c60145a8600dce932"} Feb 24 10:30:30 crc kubenswrapper[4755]: I0224 10:30:30.189395 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe13802e-a28d-4e11-a315-c0ae66bf0e1c","Type":"ContainerStarted","Data":"129999cf2c6f5d78b8ad9cb4b1c99cac169f2012398f47f0cbba1aba19702257"} Feb 24 10:30:30 crc kubenswrapper[4755]: I0224 10:30:30.189417 4755 scope.go:117] "RemoveContainer" containerID="bb6268125e049ec7a775938b07551bdc506873d3b7cecf970cc72c40925a0d95" Feb 24 10:30:31 crc kubenswrapper[4755]: I0224 10:30:31.263711 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44030: no serving certificate available for the kubelet" Feb 24 10:30:31 crc kubenswrapper[4755]: I0224 10:30:31.969972 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44038: no serving certificate available for the kubelet" Feb 24 10:30:32 crc kubenswrapper[4755]: I0224 10:30:32.851814 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39" containerName="galera" probeResult="failure" output=< Feb 24 10:30:32 crc kubenswrapper[4755]: waiting for gcomm URI Feb 24 10:30:32 crc kubenswrapper[4755]: > Feb 24 10:30:32 crc kubenswrapper[4755]: I0224 10:30:32.852244 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 10:30:32 crc kubenswrapper[4755]: I0224 10:30:32.853305 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" 
containerStatusID={"Type":"cri-o","ID":"1beb9a67cbcfc1185f8eeecd3f6602727d0ca7b2e48e874d65d187bbd5cdc985"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed startup probe, will be restarted" Feb 24 10:30:32 crc kubenswrapper[4755]: I0224 10:30:32.934823 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39" containerName="galera" containerID="cri-o://1beb9a67cbcfc1185f8eeecd3f6602727d0ca7b2e48e874d65d187bbd5cdc985" gracePeriod=30 Feb 24 10:30:33 crc kubenswrapper[4755]: I0224 10:30:33.224562 4755 generic.go:334] "Generic (PLEG): container finished" podID="2f320527-691f-48e9-a243-f60bc805da39" containerID="1beb9a67cbcfc1185f8eeecd3f6602727d0ca7b2e48e874d65d187bbd5cdc985" exitCode=143 Feb 24 10:30:33 crc kubenswrapper[4755]: I0224 10:30:33.224642 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2f320527-691f-48e9-a243-f60bc805da39","Type":"ContainerDied","Data":"1beb9a67cbcfc1185f8eeecd3f6602727d0ca7b2e48e874d65d187bbd5cdc985"} Feb 24 10:30:33 crc kubenswrapper[4755]: I0224 10:30:33.224684 4755 scope.go:117] "RemoveContainer" containerID="d9b418c6634bf36064598a0e1cfcb65663cea0ac5c0b551774275cd0ca618921" Feb 24 10:30:34 crc kubenswrapper[4755]: I0224 10:30:34.238975 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2f320527-691f-48e9-a243-f60bc805da39","Type":"ContainerStarted","Data":"a8d889c52bcf05c875239ad26fdbe4df1a47c88c54129a3d9d3a9cf0739230fe"} Feb 24 10:30:34 crc kubenswrapper[4755]: I0224 10:30:34.350139 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50052: no serving certificate available for the kubelet" Feb 24 10:30:35 crc kubenswrapper[4755]: I0224 10:30:35.028339 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50054: no serving certificate available for the kubelet" Feb 24 
10:30:37 crc kubenswrapper[4755]: I0224 10:30:37.415337 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50066: no serving certificate available for the kubelet" Feb 24 10:30:38 crc kubenswrapper[4755]: I0224 10:30:38.132230 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50074: no serving certificate available for the kubelet" Feb 24 10:30:39 crc kubenswrapper[4755]: I0224 10:30:39.887755 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 10:30:39 crc kubenswrapper[4755]: I0224 10:30:39.888151 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 24 10:30:40 crc kubenswrapper[4755]: I0224 10:30:40.479876 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50084: no serving certificate available for the kubelet" Feb 24 10:30:41 crc kubenswrapper[4755]: I0224 10:30:41.195221 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50096: no serving certificate available for the kubelet" Feb 24 10:30:41 crc kubenswrapper[4755]: I0224 10:30:41.289204 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 10:30:41 crc kubenswrapper[4755]: I0224 10:30:41.289322 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 24 10:30:43 crc kubenswrapper[4755]: I0224 10:30:43.540412 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50100: no serving certificate available for the kubelet" Feb 24 10:30:44 crc kubenswrapper[4755]: I0224 10:30:44.236252 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36832: no serving certificate available for the kubelet" Feb 24 10:30:46 crc kubenswrapper[4755]: I0224 10:30:46.582727 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36848: no serving certificate available for the kubelet" Feb 24 10:30:47 crc kubenswrapper[4755]: I0224 
10:30:47.309825 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36850: no serving certificate available for the kubelet" Feb 24 10:30:49 crc kubenswrapper[4755]: I0224 10:30:49.631600 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36866: no serving certificate available for the kubelet" Feb 24 10:30:50 crc kubenswrapper[4755]: I0224 10:30:50.363844 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36880: no serving certificate available for the kubelet" Feb 24 10:30:51 crc kubenswrapper[4755]: I0224 10:30:51.344484 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36884: no serving certificate available for the kubelet" Feb 24 10:30:51 crc kubenswrapper[4755]: I0224 10:30:51.694839 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:30:51 crc kubenswrapper[4755]: I0224 10:30:51.694932 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 10:30:52 crc kubenswrapper[4755]: I0224 10:30:52.681917 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36886: no serving certificate available for the kubelet" Feb 24 10:30:53 crc kubenswrapper[4755]: I0224 10:30:53.416429 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36894: no serving certificate available for the kubelet" Feb 24 10:30:55 crc kubenswrapper[4755]: I0224 10:30:55.735834 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52580: no serving certificate available for the kubelet" Feb 24 10:30:56 crc kubenswrapper[4755]: I0224 
10:30:56.464733 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52584: no serving certificate available for the kubelet" Feb 24 10:30:58 crc kubenswrapper[4755]: I0224 10:30:58.778228 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52592: no serving certificate available for the kubelet" Feb 24 10:30:58 crc kubenswrapper[4755]: I0224 10:30:58.869109 4755 scope.go:117] "RemoveContainer" containerID="79564065fbe3403c26cf582934dc2e1bcb7d67b6f7c12c00cba81f1397848543" Feb 24 10:30:59 crc kubenswrapper[4755]: I0224 10:30:59.526799 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52606: no serving certificate available for the kubelet" Feb 24 10:31:01 crc kubenswrapper[4755]: I0224 10:31:01.829016 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52608: no serving certificate available for the kubelet" Feb 24 10:31:02 crc kubenswrapper[4755]: I0224 10:31:02.594482 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52622: no serving certificate available for the kubelet" Feb 24 10:31:04 crc kubenswrapper[4755]: I0224 10:31:04.888921 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34420: no serving certificate available for the kubelet" Feb 24 10:31:05 crc kubenswrapper[4755]: I0224 10:31:05.652235 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34436: no serving certificate available for the kubelet" Feb 24 10:31:07 crc kubenswrapper[4755]: I0224 10:31:07.939191 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34450: no serving certificate available for the kubelet" Feb 24 10:31:08 crc kubenswrapper[4755]: I0224 10:31:08.713452 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34452: no serving certificate available for the kubelet" Feb 24 10:31:10 crc kubenswrapper[4755]: I0224 10:31:10.998059 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34458: no serving certificate available for the kubelet" Feb 24 10:31:11 crc kubenswrapper[4755]: I0224 10:31:11.764100 4755 ???:1] 
"http: TLS handshake error from 192.168.126.11:34468: no serving certificate available for the kubelet" Feb 24 10:31:14 crc kubenswrapper[4755]: I0224 10:31:14.053863 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60694: no serving certificate available for the kubelet" Feb 24 10:31:14 crc kubenswrapper[4755]: I0224 10:31:14.822382 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60710: no serving certificate available for the kubelet" Feb 24 10:31:17 crc kubenswrapper[4755]: I0224 10:31:17.110248 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60716: no serving certificate available for the kubelet" Feb 24 10:31:17 crc kubenswrapper[4755]: I0224 10:31:17.861726 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60726: no serving certificate available for the kubelet" Feb 24 10:31:20 crc kubenswrapper[4755]: I0224 10:31:20.169385 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60736: no serving certificate available for the kubelet" Feb 24 10:31:20 crc kubenswrapper[4755]: I0224 10:31:20.923972 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60748: no serving certificate available for the kubelet" Feb 24 10:31:21 crc kubenswrapper[4755]: I0224 10:31:21.695093 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:31:21 crc kubenswrapper[4755]: I0224 10:31:21.695197 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 10:31:21 crc kubenswrapper[4755]: I0224 10:31:21.695264 4755 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" Feb 24 10:31:21 crc kubenswrapper[4755]: I0224 10:31:21.696174 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ce3dda87a828ebb337745ef70df8dd0ea69f2104d8a6f36ce5c4c74f7a0bf28"} pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 10:31:21 crc kubenswrapper[4755]: I0224 10:31:21.696315 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" containerID="cri-o://3ce3dda87a828ebb337745ef70df8dd0ea69f2104d8a6f36ce5c4c74f7a0bf28" gracePeriod=600 Feb 24 10:31:22 crc kubenswrapper[4755]: I0224 10:31:22.775874 4755 generic.go:334] "Generic (PLEG): container finished" podID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerID="3ce3dda87a828ebb337745ef70df8dd0ea69f2104d8a6f36ce5c4c74f7a0bf28" exitCode=0 Feb 24 10:31:22 crc kubenswrapper[4755]: I0224 10:31:22.776577 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerDied","Data":"3ce3dda87a828ebb337745ef70df8dd0ea69f2104d8a6f36ce5c4c74f7a0bf28"} Feb 24 10:31:22 crc kubenswrapper[4755]: I0224 10:31:22.776623 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerStarted","Data":"3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed"} Feb 24 10:31:22 crc kubenswrapper[4755]: I0224 10:31:22.776654 4755 scope.go:117] "RemoveContainer" 
containerID="70f52f913b8ef419534514c7f3a63695fd28c45943e3606bf424a5d9c1b85733" Feb 24 10:31:23 crc kubenswrapper[4755]: I0224 10:31:23.220591 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60750: no serving certificate available for the kubelet" Feb 24 10:31:23 crc kubenswrapper[4755]: I0224 10:31:23.970204 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49370: no serving certificate available for the kubelet" Feb 24 10:31:26 crc kubenswrapper[4755]: I0224 10:31:26.277687 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49382: no serving certificate available for the kubelet" Feb 24 10:31:27 crc kubenswrapper[4755]: I0224 10:31:27.031116 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49398: no serving certificate available for the kubelet" Feb 24 10:31:29 crc kubenswrapper[4755]: I0224 10:31:29.339307 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49408: no serving certificate available for the kubelet" Feb 24 10:31:30 crc kubenswrapper[4755]: I0224 10:31:30.083955 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49412: no serving certificate available for the kubelet" Feb 24 10:31:32 crc kubenswrapper[4755]: I0224 10:31:32.390038 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49420: no serving certificate available for the kubelet" Feb 24 10:31:33 crc kubenswrapper[4755]: I0224 10:31:33.142815 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49424: no serving certificate available for the kubelet" Feb 24 10:31:35 crc kubenswrapper[4755]: I0224 10:31:35.451238 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33450: no serving certificate available for the kubelet" Feb 24 10:31:36 crc kubenswrapper[4755]: I0224 10:31:36.202730 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33464: no serving certificate available for the kubelet" Feb 24 10:31:38 crc kubenswrapper[4755]: I0224 10:31:38.504502 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33478: 
no serving certificate available for the kubelet" Feb 24 10:31:39 crc kubenswrapper[4755]: I0224 10:31:39.256378 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33494: no serving certificate available for the kubelet" Feb 24 10:31:41 crc kubenswrapper[4755]: I0224 10:31:41.566509 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33498: no serving certificate available for the kubelet" Feb 24 10:31:42 crc kubenswrapper[4755]: I0224 10:31:42.298423 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33508: no serving certificate available for the kubelet" Feb 24 10:31:44 crc kubenswrapper[4755]: I0224 10:31:44.620634 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34470: no serving certificate available for the kubelet" Feb 24 10:31:45 crc kubenswrapper[4755]: I0224 10:31:45.357419 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34476: no serving certificate available for the kubelet" Feb 24 10:31:47 crc kubenswrapper[4755]: I0224 10:31:47.716966 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34492: no serving certificate available for the kubelet" Feb 24 10:31:48 crc kubenswrapper[4755]: I0224 10:31:48.413162 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34502: no serving certificate available for the kubelet" Feb 24 10:31:50 crc kubenswrapper[4755]: I0224 10:31:50.771040 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34512: no serving certificate available for the kubelet" Feb 24 10:31:51 crc kubenswrapper[4755]: I0224 10:31:51.467105 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34528: no serving certificate available for the kubelet" Feb 24 10:31:53 crc kubenswrapper[4755]: I0224 10:31:53.827552 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60730: no serving certificate available for the kubelet" Feb 24 10:31:54 crc kubenswrapper[4755]: I0224 10:31:54.524299 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60738: no serving certificate 
available for the kubelet" Feb 24 10:31:56 crc kubenswrapper[4755]: I0224 10:31:56.884413 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60740: no serving certificate available for the kubelet" Feb 24 10:31:57 crc kubenswrapper[4755]: I0224 10:31:57.578555 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60752: no serving certificate available for the kubelet" Feb 24 10:31:59 crc kubenswrapper[4755]: I0224 10:31:59.934511 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60754: no serving certificate available for the kubelet" Feb 24 10:32:00 crc kubenswrapper[4755]: I0224 10:32:00.627620 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60764: no serving certificate available for the kubelet" Feb 24 10:32:02 crc kubenswrapper[4755]: I0224 10:32:02.990177 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60776: no serving certificate available for the kubelet" Feb 24 10:32:03 crc kubenswrapper[4755]: I0224 10:32:03.668298 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60786: no serving certificate available for the kubelet" Feb 24 10:32:06 crc kubenswrapper[4755]: I0224 10:32:06.051707 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51894: no serving certificate available for the kubelet" Feb 24 10:32:06 crc kubenswrapper[4755]: I0224 10:32:06.752990 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51900: no serving certificate available for the kubelet" Feb 24 10:32:09 crc kubenswrapper[4755]: I0224 10:32:09.092532 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51904: no serving certificate available for the kubelet" Feb 24 10:32:09 crc kubenswrapper[4755]: I0224 10:32:09.818231 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51916: no serving certificate available for the kubelet" Feb 24 10:32:12 crc kubenswrapper[4755]: I0224 10:32:12.152387 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51924: no serving certificate available for the kubelet" Feb 
24 10:32:12 crc kubenswrapper[4755]: I0224 10:32:12.889631 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51932: no serving certificate available for the kubelet" Feb 24 10:32:15 crc kubenswrapper[4755]: I0224 10:32:15.215299 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57496: no serving certificate available for the kubelet" Feb 24 10:32:15 crc kubenswrapper[4755]: I0224 10:32:15.942920 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57512: no serving certificate available for the kubelet" Feb 24 10:32:18 crc kubenswrapper[4755]: I0224 10:32:18.265839 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57526: no serving certificate available for the kubelet" Feb 24 10:32:18 crc kubenswrapper[4755]: I0224 10:32:18.982605 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57536: no serving certificate available for the kubelet" Feb 24 10:32:21 crc kubenswrapper[4755]: I0224 10:32:21.320165 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57540: no serving certificate available for the kubelet" Feb 24 10:32:22 crc kubenswrapper[4755]: I0224 10:32:22.025481 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57542: no serving certificate available for the kubelet" Feb 24 10:32:24 crc kubenswrapper[4755]: I0224 10:32:24.383677 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52188: no serving certificate available for the kubelet" Feb 24 10:32:25 crc kubenswrapper[4755]: I0224 10:32:25.071523 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52190: no serving certificate available for the kubelet" Feb 24 10:32:27 crc kubenswrapper[4755]: I0224 10:32:27.450473 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52196: no serving certificate available for the kubelet" Feb 24 10:32:28 crc kubenswrapper[4755]: I0224 10:32:28.133207 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52198: no serving certificate available for the kubelet" Feb 24 10:32:30 crc 
kubenswrapper[4755]: I0224 10:32:30.504866 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52210: no serving certificate available for the kubelet" Feb 24 10:32:31 crc kubenswrapper[4755]: I0224 10:32:31.190492 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52214: no serving certificate available for the kubelet" Feb 24 10:32:33 crc kubenswrapper[4755]: I0224 10:32:33.558753 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52224: no serving certificate available for the kubelet" Feb 24 10:32:34 crc kubenswrapper[4755]: I0224 10:32:34.251194 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42506: no serving certificate available for the kubelet" Feb 24 10:32:36 crc kubenswrapper[4755]: I0224 10:32:36.616745 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42520: no serving certificate available for the kubelet" Feb 24 10:32:37 crc kubenswrapper[4755]: I0224 10:32:37.311466 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42536: no serving certificate available for the kubelet" Feb 24 10:32:39 crc kubenswrapper[4755]: I0224 10:32:39.671720 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42548: no serving certificate available for the kubelet" Feb 24 10:32:40 crc kubenswrapper[4755]: I0224 10:32:40.372308 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42562: no serving certificate available for the kubelet" Feb 24 10:32:42 crc kubenswrapper[4755]: I0224 10:32:42.727796 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42578: no serving certificate available for the kubelet" Feb 24 10:32:43 crc kubenswrapper[4755]: I0224 10:32:43.425743 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42580: no serving certificate available for the kubelet" Feb 24 10:32:45 crc kubenswrapper[4755]: I0224 10:32:45.792430 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46778: no serving certificate available for the kubelet" Feb 24 10:32:46 crc kubenswrapper[4755]: I0224 
10:32:46.519687 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46788: no serving certificate available for the kubelet" Feb 24 10:32:48 crc kubenswrapper[4755]: I0224 10:32:48.840975 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46810: no serving certificate available for the kubelet" Feb 24 10:32:49 crc kubenswrapper[4755]: I0224 10:32:49.579790 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46822: no serving certificate available for the kubelet" Feb 24 10:32:51 crc kubenswrapper[4755]: I0224 10:32:51.888801 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46836: no serving certificate available for the kubelet" Feb 24 10:32:52 crc kubenswrapper[4755]: I0224 10:32:52.636019 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46848: no serving certificate available for the kubelet" Feb 24 10:32:54 crc kubenswrapper[4755]: I0224 10:32:54.957179 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44212: no serving certificate available for the kubelet" Feb 24 10:32:55 crc kubenswrapper[4755]: I0224 10:32:55.688127 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44216: no serving certificate available for the kubelet" Feb 24 10:32:58 crc kubenswrapper[4755]: I0224 10:32:58.005889 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44224: no serving certificate available for the kubelet" Feb 24 10:32:58 crc kubenswrapper[4755]: I0224 10:32:58.734750 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44234: no serving certificate available for the kubelet" Feb 24 10:32:58 crc kubenswrapper[4755]: I0224 10:32:58.967596 4755 scope.go:117] "RemoveContainer" containerID="dc3087fdbca8573e43a0a7df6433c5c43b2044367723320bd61f266da722a033" Feb 24 10:32:58 crc kubenswrapper[4755]: I0224 10:32:58.995998 4755 scope.go:117] "RemoveContainer" containerID="6fbd103e0acd0accd95fe356f43cf0adedea225e834eb52170a66f2a9e597ec7" Feb 24 10:32:59 crc kubenswrapper[4755]: I0224 10:32:59.024949 4755 
scope.go:117] "RemoveContainer" containerID="0a382a9588c44b9b328d88453fc1ff481a87a8b13321eae33bf7b4f44930ef4d" Feb 24 10:33:01 crc kubenswrapper[4755]: I0224 10:33:01.052850 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44250: no serving certificate available for the kubelet" Feb 24 10:33:01 crc kubenswrapper[4755]: I0224 10:33:01.802056 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44260: no serving certificate available for the kubelet" Feb 24 10:33:04 crc kubenswrapper[4755]: I0224 10:33:04.105300 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37652: no serving certificate available for the kubelet" Feb 24 10:33:04 crc kubenswrapper[4755]: I0224 10:33:04.854020 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37668: no serving certificate available for the kubelet" Feb 24 10:33:07 crc kubenswrapper[4755]: I0224 10:33:07.143291 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37680: no serving certificate available for the kubelet" Feb 24 10:33:07 crc kubenswrapper[4755]: I0224 10:33:07.896445 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37690: no serving certificate available for the kubelet" Feb 24 10:33:10 crc kubenswrapper[4755]: I0224 10:33:10.179615 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37702: no serving certificate available for the kubelet" Feb 24 10:33:10 crc kubenswrapper[4755]: I0224 10:33:10.948958 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37708: no serving certificate available for the kubelet" Feb 24 10:33:13 crc kubenswrapper[4755]: I0224 10:33:13.230647 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37710: no serving certificate available for the kubelet" Feb 24 10:33:14 crc kubenswrapper[4755]: I0224 10:33:14.006188 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46100: no serving certificate available for the kubelet" Feb 24 10:33:16 crc kubenswrapper[4755]: I0224 10:33:16.287626 4755 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/redhat-operators-9p8vw"] Feb 24 10:33:16 crc kubenswrapper[4755]: I0224 10:33:16.290603 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46106: no serving certificate available for the kubelet" Feb 24 10:33:16 crc kubenswrapper[4755]: E0224 10:33:16.295852 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1da4ebf-91f7-4419-879c-692efc476e8d" containerName="collect-profiles" Feb 24 10:33:16 crc kubenswrapper[4755]: I0224 10:33:16.295870 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1da4ebf-91f7-4419-879c-692efc476e8d" containerName="collect-profiles" Feb 24 10:33:16 crc kubenswrapper[4755]: I0224 10:33:16.296040 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1da4ebf-91f7-4419-879c-692efc476e8d" containerName="collect-profiles" Feb 24 10:33:16 crc kubenswrapper[4755]: I0224 10:33:16.297180 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9p8vw" Feb 24 10:33:16 crc kubenswrapper[4755]: I0224 10:33:16.297597 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9p8vw"] Feb 24 10:33:16 crc kubenswrapper[4755]: I0224 10:33:16.394415 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34ace744-fff5-453f-a7aa-27041238a22e-catalog-content\") pod \"redhat-operators-9p8vw\" (UID: \"34ace744-fff5-453f-a7aa-27041238a22e\") " pod="openshift-marketplace/redhat-operators-9p8vw" Feb 24 10:33:16 crc kubenswrapper[4755]: I0224 10:33:16.394500 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34ace744-fff5-453f-a7aa-27041238a22e-utilities\") pod \"redhat-operators-9p8vw\" (UID: \"34ace744-fff5-453f-a7aa-27041238a22e\") " 
pod="openshift-marketplace/redhat-operators-9p8vw" Feb 24 10:33:16 crc kubenswrapper[4755]: I0224 10:33:16.394545 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4x65\" (UniqueName: \"kubernetes.io/projected/34ace744-fff5-453f-a7aa-27041238a22e-kube-api-access-v4x65\") pod \"redhat-operators-9p8vw\" (UID: \"34ace744-fff5-453f-a7aa-27041238a22e\") " pod="openshift-marketplace/redhat-operators-9p8vw" Feb 24 10:33:16 crc kubenswrapper[4755]: I0224 10:33:16.495824 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34ace744-fff5-453f-a7aa-27041238a22e-catalog-content\") pod \"redhat-operators-9p8vw\" (UID: \"34ace744-fff5-453f-a7aa-27041238a22e\") " pod="openshift-marketplace/redhat-operators-9p8vw" Feb 24 10:33:16 crc kubenswrapper[4755]: I0224 10:33:16.495931 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34ace744-fff5-453f-a7aa-27041238a22e-utilities\") pod \"redhat-operators-9p8vw\" (UID: \"34ace744-fff5-453f-a7aa-27041238a22e\") " pod="openshift-marketplace/redhat-operators-9p8vw" Feb 24 10:33:16 crc kubenswrapper[4755]: I0224 10:33:16.495993 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4x65\" (UniqueName: \"kubernetes.io/projected/34ace744-fff5-453f-a7aa-27041238a22e-kube-api-access-v4x65\") pod \"redhat-operators-9p8vw\" (UID: \"34ace744-fff5-453f-a7aa-27041238a22e\") " pod="openshift-marketplace/redhat-operators-9p8vw" Feb 24 10:33:16 crc kubenswrapper[4755]: I0224 10:33:16.496627 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34ace744-fff5-453f-a7aa-27041238a22e-utilities\") pod \"redhat-operators-9p8vw\" (UID: \"34ace744-fff5-453f-a7aa-27041238a22e\") " 
pod="openshift-marketplace/redhat-operators-9p8vw" Feb 24 10:33:16 crc kubenswrapper[4755]: I0224 10:33:16.496599 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34ace744-fff5-453f-a7aa-27041238a22e-catalog-content\") pod \"redhat-operators-9p8vw\" (UID: \"34ace744-fff5-453f-a7aa-27041238a22e\") " pod="openshift-marketplace/redhat-operators-9p8vw" Feb 24 10:33:16 crc kubenswrapper[4755]: I0224 10:33:16.527183 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4x65\" (UniqueName: \"kubernetes.io/projected/34ace744-fff5-453f-a7aa-27041238a22e-kube-api-access-v4x65\") pod \"redhat-operators-9p8vw\" (UID: \"34ace744-fff5-453f-a7aa-27041238a22e\") " pod="openshift-marketplace/redhat-operators-9p8vw" Feb 24 10:33:16 crc kubenswrapper[4755]: I0224 10:33:16.630091 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9p8vw" Feb 24 10:33:17 crc kubenswrapper[4755]: I0224 10:33:17.046297 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46120: no serving certificate available for the kubelet" Feb 24 10:33:17 crc kubenswrapper[4755]: I0224 10:33:17.179510 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9p8vw"] Feb 24 10:33:17 crc kubenswrapper[4755]: I0224 10:33:17.923876 4755 generic.go:334] "Generic (PLEG): container finished" podID="34ace744-fff5-453f-a7aa-27041238a22e" containerID="b6fda1e1a76d7bb837eca6801529100951e12e51a678ff84d4256090c680f0d9" exitCode=0 Feb 24 10:33:17 crc kubenswrapper[4755]: I0224 10:33:17.923956 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9p8vw" event={"ID":"34ace744-fff5-453f-a7aa-27041238a22e","Type":"ContainerDied","Data":"b6fda1e1a76d7bb837eca6801529100951e12e51a678ff84d4256090c680f0d9"} Feb 24 10:33:17 crc kubenswrapper[4755]: I0224 
10:33:17.924207 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9p8vw" event={"ID":"34ace744-fff5-453f-a7aa-27041238a22e","Type":"ContainerStarted","Data":"272d48906feb05260e1229f605adc968245e22802d99178d4a7d418fcda8d174"} Feb 24 10:33:17 crc kubenswrapper[4755]: I0224 10:33:17.927620 4755 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 10:33:18 crc kubenswrapper[4755]: I0224 10:33:18.934320 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9p8vw" event={"ID":"34ace744-fff5-453f-a7aa-27041238a22e","Type":"ContainerStarted","Data":"c0e3bb2175c01fe1bd4b0a5e7d1aa5f40b1a606e394b6b2c8287c9e79b7f2b2a"} Feb 24 10:33:19 crc kubenswrapper[4755]: I0224 10:33:19.341737 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46134: no serving certificate available for the kubelet" Feb 24 10:33:20 crc kubenswrapper[4755]: I0224 10:33:20.095893 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46140: no serving certificate available for the kubelet" Feb 24 10:33:21 crc kubenswrapper[4755]: I0224 10:33:21.969941 4755 generic.go:334] "Generic (PLEG): container finished" podID="34ace744-fff5-453f-a7aa-27041238a22e" containerID="c0e3bb2175c01fe1bd4b0a5e7d1aa5f40b1a606e394b6b2c8287c9e79b7f2b2a" exitCode=0 Feb 24 10:33:21 crc kubenswrapper[4755]: I0224 10:33:21.970004 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9p8vw" event={"ID":"34ace744-fff5-453f-a7aa-27041238a22e","Type":"ContainerDied","Data":"c0e3bb2175c01fe1bd4b0a5e7d1aa5f40b1a606e394b6b2c8287c9e79b7f2b2a"} Feb 24 10:33:22 crc kubenswrapper[4755]: I0224 10:33:22.396554 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46144: no serving certificate available for the kubelet" Feb 24 10:33:22 crc kubenswrapper[4755]: I0224 10:33:22.980842 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-9p8vw" event={"ID":"34ace744-fff5-453f-a7aa-27041238a22e","Type":"ContainerStarted","Data":"a9047ff01731431dd7fca88738bd992157cb49722c91741cc0aefed604bdbfcf"} Feb 24 10:33:23 crc kubenswrapper[4755]: I0224 10:33:23.014328 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9p8vw" podStartSLOduration=2.56375936 podStartE2EDuration="7.014298414s" podCreationTimestamp="2026-02-24 10:33:16 +0000 UTC" firstStartedPulling="2026-02-24 10:33:17.927376595 +0000 UTC m=+2302.383899138" lastFinishedPulling="2026-02-24 10:33:22.377915649 +0000 UTC m=+2306.834438192" observedRunningTime="2026-02-24 10:33:23.007488553 +0000 UTC m=+2307.464011166" watchObservedRunningTime="2026-02-24 10:33:23.014298414 +0000 UTC m=+2307.470821007" Feb 24 10:33:23 crc kubenswrapper[4755]: I0224 10:33:23.152511 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46158: no serving certificate available for the kubelet" Feb 24 10:33:25 crc kubenswrapper[4755]: I0224 10:33:25.449710 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44308: no serving certificate available for the kubelet" Feb 24 10:33:26 crc kubenswrapper[4755]: I0224 10:33:26.200188 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44320: no serving certificate available for the kubelet" Feb 24 10:33:26 crc kubenswrapper[4755]: I0224 10:33:26.630499 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9p8vw" Feb 24 10:33:26 crc kubenswrapper[4755]: I0224 10:33:26.630610 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9p8vw" Feb 24 10:33:27 crc kubenswrapper[4755]: I0224 10:33:27.701167 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9p8vw" podUID="34ace744-fff5-453f-a7aa-27041238a22e" containerName="registry-server" 
probeResult="failure" output=< Feb 24 10:33:27 crc kubenswrapper[4755]: timeout: failed to connect service ":50051" within 1s Feb 24 10:33:27 crc kubenswrapper[4755]: > Feb 24 10:33:28 crc kubenswrapper[4755]: I0224 10:33:28.518157 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44326: no serving certificate available for the kubelet" Feb 24 10:33:29 crc kubenswrapper[4755]: I0224 10:33:29.248324 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44338: no serving certificate available for the kubelet" Feb 24 10:33:31 crc kubenswrapper[4755]: I0224 10:33:31.579010 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44340: no serving certificate available for the kubelet" Feb 24 10:33:32 crc kubenswrapper[4755]: I0224 10:33:32.303367 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44344: no serving certificate available for the kubelet" Feb 24 10:33:34 crc kubenswrapper[4755]: I0224 10:33:34.645399 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57774: no serving certificate available for the kubelet" Feb 24 10:33:35 crc kubenswrapper[4755]: I0224 10:33:35.512940 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57790: no serving certificate available for the kubelet" Feb 24 10:33:35 crc kubenswrapper[4755]: I0224 10:33:35.527629 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57798: no serving certificate available for the kubelet" Feb 24 10:33:36 crc kubenswrapper[4755]: I0224 10:33:36.708856 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9p8vw" Feb 24 10:33:36 crc kubenswrapper[4755]: I0224 10:33:36.782938 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9p8vw" Feb 24 10:33:36 crc kubenswrapper[4755]: I0224 10:33:36.959053 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9p8vw"] Feb 24 10:33:37 crc 
kubenswrapper[4755]: I0224 10:33:37.704988 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57804: no serving certificate available for the kubelet" Feb 24 10:33:38 crc kubenswrapper[4755]: I0224 10:33:38.121755 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9p8vw" podUID="34ace744-fff5-453f-a7aa-27041238a22e" containerName="registry-server" containerID="cri-o://a9047ff01731431dd7fca88738bd992157cb49722c91741cc0aefed604bdbfcf" gracePeriod=2 Feb 24 10:33:38 crc kubenswrapper[4755]: I0224 10:33:38.576118 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57812: no serving certificate available for the kubelet" Feb 24 10:33:38 crc kubenswrapper[4755]: I0224 10:33:38.699096 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9p8vw" Feb 24 10:33:38 crc kubenswrapper[4755]: I0224 10:33:38.811993 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34ace744-fff5-453f-a7aa-27041238a22e-utilities\") pod \"34ace744-fff5-453f-a7aa-27041238a22e\" (UID: \"34ace744-fff5-453f-a7aa-27041238a22e\") " Feb 24 10:33:38 crc kubenswrapper[4755]: I0224 10:33:38.812144 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34ace744-fff5-453f-a7aa-27041238a22e-catalog-content\") pod \"34ace744-fff5-453f-a7aa-27041238a22e\" (UID: \"34ace744-fff5-453f-a7aa-27041238a22e\") " Feb 24 10:33:38 crc kubenswrapper[4755]: I0224 10:33:38.812206 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4x65\" (UniqueName: \"kubernetes.io/projected/34ace744-fff5-453f-a7aa-27041238a22e-kube-api-access-v4x65\") pod \"34ace744-fff5-453f-a7aa-27041238a22e\" (UID: \"34ace744-fff5-453f-a7aa-27041238a22e\") " Feb 24 10:33:38 crc 
kubenswrapper[4755]: I0224 10:33:38.812986 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34ace744-fff5-453f-a7aa-27041238a22e-utilities" (OuterVolumeSpecName: "utilities") pod "34ace744-fff5-453f-a7aa-27041238a22e" (UID: "34ace744-fff5-453f-a7aa-27041238a22e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:33:38 crc kubenswrapper[4755]: I0224 10:33:38.834372 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34ace744-fff5-453f-a7aa-27041238a22e-kube-api-access-v4x65" (OuterVolumeSpecName: "kube-api-access-v4x65") pod "34ace744-fff5-453f-a7aa-27041238a22e" (UID: "34ace744-fff5-453f-a7aa-27041238a22e"). InnerVolumeSpecName "kube-api-access-v4x65". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:33:38 crc kubenswrapper[4755]: I0224 10:33:38.914999 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4x65\" (UniqueName: \"kubernetes.io/projected/34ace744-fff5-453f-a7aa-27041238a22e-kube-api-access-v4x65\") on node \"crc\" DevicePath \"\"" Feb 24 10:33:38 crc kubenswrapper[4755]: I0224 10:33:38.915054 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34ace744-fff5-453f-a7aa-27041238a22e-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 10:33:39 crc kubenswrapper[4755]: I0224 10:33:39.008324 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34ace744-fff5-453f-a7aa-27041238a22e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34ace744-fff5-453f-a7aa-27041238a22e" (UID: "34ace744-fff5-453f-a7aa-27041238a22e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:33:39 crc kubenswrapper[4755]: I0224 10:33:39.017194 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34ace744-fff5-453f-a7aa-27041238a22e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 10:33:39 crc kubenswrapper[4755]: I0224 10:33:39.137135 4755 generic.go:334] "Generic (PLEG): container finished" podID="34ace744-fff5-453f-a7aa-27041238a22e" containerID="a9047ff01731431dd7fca88738bd992157cb49722c91741cc0aefed604bdbfcf" exitCode=0 Feb 24 10:33:39 crc kubenswrapper[4755]: I0224 10:33:39.137228 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9p8vw" Feb 24 10:33:39 crc kubenswrapper[4755]: I0224 10:33:39.137252 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9p8vw" event={"ID":"34ace744-fff5-453f-a7aa-27041238a22e","Type":"ContainerDied","Data":"a9047ff01731431dd7fca88738bd992157cb49722c91741cc0aefed604bdbfcf"} Feb 24 10:33:39 crc kubenswrapper[4755]: I0224 10:33:39.137835 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9p8vw" event={"ID":"34ace744-fff5-453f-a7aa-27041238a22e","Type":"ContainerDied","Data":"272d48906feb05260e1229f605adc968245e22802d99178d4a7d418fcda8d174"} Feb 24 10:33:39 crc kubenswrapper[4755]: I0224 10:33:39.137876 4755 scope.go:117] "RemoveContainer" containerID="a9047ff01731431dd7fca88738bd992157cb49722c91741cc0aefed604bdbfcf" Feb 24 10:33:39 crc kubenswrapper[4755]: I0224 10:33:39.169709 4755 scope.go:117] "RemoveContainer" containerID="c0e3bb2175c01fe1bd4b0a5e7d1aa5f40b1a606e394b6b2c8287c9e79b7f2b2a" Feb 24 10:33:39 crc kubenswrapper[4755]: I0224 10:33:39.196597 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9p8vw"] Feb 24 10:33:39 crc kubenswrapper[4755]: I0224 
10:33:39.207816 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9p8vw"] Feb 24 10:33:39 crc kubenswrapper[4755]: I0224 10:33:39.215046 4755 scope.go:117] "RemoveContainer" containerID="b6fda1e1a76d7bb837eca6801529100951e12e51a678ff84d4256090c680f0d9" Feb 24 10:33:39 crc kubenswrapper[4755]: I0224 10:33:39.266496 4755 scope.go:117] "RemoveContainer" containerID="a9047ff01731431dd7fca88738bd992157cb49722c91741cc0aefed604bdbfcf" Feb 24 10:33:39 crc kubenswrapper[4755]: E0224 10:33:39.267224 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9047ff01731431dd7fca88738bd992157cb49722c91741cc0aefed604bdbfcf\": container with ID starting with a9047ff01731431dd7fca88738bd992157cb49722c91741cc0aefed604bdbfcf not found: ID does not exist" containerID="a9047ff01731431dd7fca88738bd992157cb49722c91741cc0aefed604bdbfcf" Feb 24 10:33:39 crc kubenswrapper[4755]: I0224 10:33:39.267305 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9047ff01731431dd7fca88738bd992157cb49722c91741cc0aefed604bdbfcf"} err="failed to get container status \"a9047ff01731431dd7fca88738bd992157cb49722c91741cc0aefed604bdbfcf\": rpc error: code = NotFound desc = could not find container \"a9047ff01731431dd7fca88738bd992157cb49722c91741cc0aefed604bdbfcf\": container with ID starting with a9047ff01731431dd7fca88738bd992157cb49722c91741cc0aefed604bdbfcf not found: ID does not exist" Feb 24 10:33:39 crc kubenswrapper[4755]: I0224 10:33:39.267357 4755 scope.go:117] "RemoveContainer" containerID="c0e3bb2175c01fe1bd4b0a5e7d1aa5f40b1a606e394b6b2c8287c9e79b7f2b2a" Feb 24 10:33:39 crc kubenswrapper[4755]: E0224 10:33:39.267899 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0e3bb2175c01fe1bd4b0a5e7d1aa5f40b1a606e394b6b2c8287c9e79b7f2b2a\": container with ID 
starting with c0e3bb2175c01fe1bd4b0a5e7d1aa5f40b1a606e394b6b2c8287c9e79b7f2b2a not found: ID does not exist" containerID="c0e3bb2175c01fe1bd4b0a5e7d1aa5f40b1a606e394b6b2c8287c9e79b7f2b2a" Feb 24 10:33:39 crc kubenswrapper[4755]: I0224 10:33:39.267948 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0e3bb2175c01fe1bd4b0a5e7d1aa5f40b1a606e394b6b2c8287c9e79b7f2b2a"} err="failed to get container status \"c0e3bb2175c01fe1bd4b0a5e7d1aa5f40b1a606e394b6b2c8287c9e79b7f2b2a\": rpc error: code = NotFound desc = could not find container \"c0e3bb2175c01fe1bd4b0a5e7d1aa5f40b1a606e394b6b2c8287c9e79b7f2b2a\": container with ID starting with c0e3bb2175c01fe1bd4b0a5e7d1aa5f40b1a606e394b6b2c8287c9e79b7f2b2a not found: ID does not exist" Feb 24 10:33:39 crc kubenswrapper[4755]: I0224 10:33:39.267980 4755 scope.go:117] "RemoveContainer" containerID="b6fda1e1a76d7bb837eca6801529100951e12e51a678ff84d4256090c680f0d9" Feb 24 10:33:39 crc kubenswrapper[4755]: E0224 10:33:39.268502 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6fda1e1a76d7bb837eca6801529100951e12e51a678ff84d4256090c680f0d9\": container with ID starting with b6fda1e1a76d7bb837eca6801529100951e12e51a678ff84d4256090c680f0d9 not found: ID does not exist" containerID="b6fda1e1a76d7bb837eca6801529100951e12e51a678ff84d4256090c680f0d9" Feb 24 10:33:39 crc kubenswrapper[4755]: I0224 10:33:39.268576 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6fda1e1a76d7bb837eca6801529100951e12e51a678ff84d4256090c680f0d9"} err="failed to get container status \"b6fda1e1a76d7bb837eca6801529100951e12e51a678ff84d4256090c680f0d9\": rpc error: code = NotFound desc = could not find container \"b6fda1e1a76d7bb837eca6801529100951e12e51a678ff84d4256090c680f0d9\": container with ID starting with b6fda1e1a76d7bb837eca6801529100951e12e51a678ff84d4256090c680f0d9 not found: 
ID does not exist" Feb 24 10:33:40 crc kubenswrapper[4755]: I0224 10:33:40.335759 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34ace744-fff5-453f-a7aa-27041238a22e" path="/var/lib/kubelet/pods/34ace744-fff5-453f-a7aa-27041238a22e/volumes" Feb 24 10:33:40 crc kubenswrapper[4755]: I0224 10:33:40.756813 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57822: no serving certificate available for the kubelet" Feb 24 10:33:41 crc kubenswrapper[4755]: I0224 10:33:41.636583 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57836: no serving certificate available for the kubelet" Feb 24 10:33:43 crc kubenswrapper[4755]: I0224 10:33:43.828691 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56080: no serving certificate available for the kubelet" Feb 24 10:33:44 crc kubenswrapper[4755]: I0224 10:33:44.692230 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56090: no serving certificate available for the kubelet" Feb 24 10:33:46 crc kubenswrapper[4755]: I0224 10:33:46.879477 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56098: no serving certificate available for the kubelet" Feb 24 10:33:47 crc kubenswrapper[4755]: I0224 10:33:47.750292 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56100: no serving certificate available for the kubelet" Feb 24 10:33:49 crc kubenswrapper[4755]: I0224 10:33:49.933909 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56108: no serving certificate available for the kubelet" Feb 24 10:33:50 crc kubenswrapper[4755]: I0224 10:33:50.799409 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56122: no serving certificate available for the kubelet" Feb 24 10:33:51 crc kubenswrapper[4755]: I0224 10:33:51.694612 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:33:51 crc kubenswrapper[4755]: I0224 10:33:51.694697 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 10:33:52 crc kubenswrapper[4755]: I0224 10:33:52.993237 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56132: no serving certificate available for the kubelet" Feb 24 10:33:53 crc kubenswrapper[4755]: I0224 10:33:53.864061 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60956: no serving certificate available for the kubelet" Feb 24 10:33:56 crc kubenswrapper[4755]: I0224 10:33:56.052463 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60972: no serving certificate available for the kubelet" Feb 24 10:33:56 crc kubenswrapper[4755]: I0224 10:33:56.921488 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60982: no serving certificate available for the kubelet" Feb 24 10:33:59 crc kubenswrapper[4755]: I0224 10:33:59.107702 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60994: no serving certificate available for the kubelet" Feb 24 10:33:59 crc kubenswrapper[4755]: I0224 10:33:59.964372 4755 ???:1] "http: TLS handshake error from 192.168.126.11:32774: no serving certificate available for the kubelet" Feb 24 10:34:02 crc kubenswrapper[4755]: I0224 10:34:02.170262 4755 ???:1] "http: TLS handshake error from 192.168.126.11:32780: no serving certificate available for the kubelet" Feb 24 10:34:03 crc kubenswrapper[4755]: I0224 10:34:03.016256 4755 ???:1] "http: TLS handshake error from 192.168.126.11:32782: no serving certificate available for the kubelet" Feb 24 10:34:05 crc kubenswrapper[4755]: I0224 10:34:05.228479 4755 ???:1] "http: TLS handshake error from 
192.168.126.11:42934: no serving certificate available for the kubelet" Feb 24 10:34:06 crc kubenswrapper[4755]: I0224 10:34:06.057945 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42940: no serving certificate available for the kubelet" Feb 24 10:34:08 crc kubenswrapper[4755]: I0224 10:34:08.286552 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42954: no serving certificate available for the kubelet" Feb 24 10:34:09 crc kubenswrapper[4755]: I0224 10:34:09.114816 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42964: no serving certificate available for the kubelet" Feb 24 10:34:11 crc kubenswrapper[4755]: I0224 10:34:11.345135 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42980: no serving certificate available for the kubelet" Feb 24 10:34:11 crc kubenswrapper[4755]: I0224 10:34:11.492116 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42992: no serving certificate available for the kubelet" Feb 24 10:34:12 crc kubenswrapper[4755]: I0224 10:34:12.170123 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43006: no serving certificate available for the kubelet" Feb 24 10:34:14 crc kubenswrapper[4755]: I0224 10:34:14.391833 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36072: no serving certificate available for the kubelet" Feb 24 10:34:15 crc kubenswrapper[4755]: I0224 10:34:15.226359 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36076: no serving certificate available for the kubelet" Feb 24 10:34:17 crc kubenswrapper[4755]: I0224 10:34:17.427848 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36084: no serving certificate available for the kubelet" Feb 24 10:34:18 crc kubenswrapper[4755]: I0224 10:34:18.285919 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36090: no serving certificate available for the kubelet" Feb 24 10:34:20 crc kubenswrapper[4755]: I0224 10:34:20.476916 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36100: no 
serving certificate available for the kubelet" Feb 24 10:34:21 crc kubenswrapper[4755]: I0224 10:34:21.345262 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36104: no serving certificate available for the kubelet" Feb 24 10:34:21 crc kubenswrapper[4755]: I0224 10:34:21.695316 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:34:21 crc kubenswrapper[4755]: I0224 10:34:21.695428 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 10:34:23 crc kubenswrapper[4755]: I0224 10:34:23.530802 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36120: no serving certificate available for the kubelet" Feb 24 10:34:24 crc kubenswrapper[4755]: I0224 10:34:24.404465 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46282: no serving certificate available for the kubelet" Feb 24 10:34:26 crc kubenswrapper[4755]: I0224 10:34:26.567433 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46292: no serving certificate available for the kubelet" Feb 24 10:34:27 crc kubenswrapper[4755]: I0224 10:34:27.457616 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46308: no serving certificate available for the kubelet" Feb 24 10:34:29 crc kubenswrapper[4755]: I0224 10:34:29.627365 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46316: no serving certificate available for the kubelet" Feb 24 10:34:30 crc kubenswrapper[4755]: I0224 10:34:30.502089 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46326: no 
serving certificate available for the kubelet"
Feb 24 10:34:32 crc kubenswrapper[4755]: I0224 10:34:32.711828 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46328: no serving certificate available for the kubelet"
Feb 24 10:34:33 crc kubenswrapper[4755]: I0224 10:34:33.553114 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46334: no serving certificate available for the kubelet"
Feb 24 10:34:35 crc kubenswrapper[4755]: I0224 10:34:35.759725 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54176: no serving certificate available for the kubelet"
Feb 24 10:34:36 crc kubenswrapper[4755]: I0224 10:34:36.614663 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54178: no serving certificate available for the kubelet"
Feb 24 10:34:38 crc kubenswrapper[4755]: I0224 10:34:38.818559 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54194: no serving certificate available for the kubelet"
Feb 24 10:34:39 crc kubenswrapper[4755]: I0224 10:34:39.431272 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" containerName="galera" probeResult="failure" output=<
Feb 24 10:34:39 crc kubenswrapper[4755]: waiting for gcomm URI
Feb 24 10:34:39 crc kubenswrapper[4755]: >
Feb 24 10:34:39 crc kubenswrapper[4755]: I0224 10:34:39.431701 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 24 10:34:39 crc kubenswrapper[4755]: I0224 10:34:39.432697 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"129999cf2c6f5d78b8ad9cb4b1c99cac169f2012398f47f0cbba1aba19702257"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed startup probe, will be restarted"
Feb 24 10:34:39 crc kubenswrapper[4755]: I0224 10:34:39.511626 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" containerName="galera" containerID="cri-o://129999cf2c6f5d78b8ad9cb4b1c99cac169f2012398f47f0cbba1aba19702257" gracePeriod=30
Feb 24 10:34:39 crc kubenswrapper[4755]: I0224 10:34:39.677432 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54200: no serving certificate available for the kubelet"
Feb 24 10:34:39 crc kubenswrapper[4755]: I0224 10:34:39.766534 4755 generic.go:334] "Generic (PLEG): container finished" podID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" containerID="129999cf2c6f5d78b8ad9cb4b1c99cac169f2012398f47f0cbba1aba19702257" exitCode=143
Feb 24 10:34:39 crc kubenswrapper[4755]: I0224 10:34:39.766587 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe13802e-a28d-4e11-a315-c0ae66bf0e1c","Type":"ContainerDied","Data":"129999cf2c6f5d78b8ad9cb4b1c99cac169f2012398f47f0cbba1aba19702257"}
Feb 24 10:34:39 crc kubenswrapper[4755]: I0224 10:34:39.766636 4755 scope.go:117] "RemoveContainer" containerID="1dcc6a0e5a48ba2150e9af6809b5a45f82baa846e848c62c60145a8600dce932"
Feb 24 10:34:40 crc kubenswrapper[4755]: I0224 10:34:40.780457 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe13802e-a28d-4e11-a315-c0ae66bf0e1c","Type":"ContainerStarted","Data":"800f98376a75908f371f36101dfbbd70164bb3acb8517901561bd0e5eddf4c78"}
Feb 24 10:34:41 crc kubenswrapper[4755]: I0224 10:34:41.855888 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54208: no serving certificate available for the kubelet"
Feb 24 10:34:42 crc kubenswrapper[4755]: I0224 10:34:42.796256 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54216: no serving certificate available for the kubelet"
Feb 24 10:34:42 crc kubenswrapper[4755]: I0224 10:34:42.881414 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39" containerName="galera" probeResult="failure" output=<
Feb 24 10:34:42 crc kubenswrapper[4755]: waiting for gcomm URI
Feb 24 10:34:42 crc kubenswrapper[4755]: >
Feb 24 10:34:42 crc kubenswrapper[4755]: I0224 10:34:42.881485 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 24 10:34:42 crc kubenswrapper[4755]: I0224 10:34:42.882046 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"a8d889c52bcf05c875239ad26fdbe4df1a47c88c54129a3d9d3a9cf0739230fe"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed startup probe, will be restarted"
Feb 24 10:34:42 crc kubenswrapper[4755]: I0224 10:34:42.958394 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39" containerName="galera" containerID="cri-o://a8d889c52bcf05c875239ad26fdbe4df1a47c88c54129a3d9d3a9cf0739230fe" gracePeriod=30
Feb 24 10:34:43 crc kubenswrapper[4755]: I0224 10:34:43.816287 4755 generic.go:334] "Generic (PLEG): container finished" podID="2f320527-691f-48e9-a243-f60bc805da39" containerID="a8d889c52bcf05c875239ad26fdbe4df1a47c88c54129a3d9d3a9cf0739230fe" exitCode=143
Feb 24 10:34:43 crc kubenswrapper[4755]: I0224 10:34:43.816331 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2f320527-691f-48e9-a243-f60bc805da39","Type":"ContainerDied","Data":"a8d889c52bcf05c875239ad26fdbe4df1a47c88c54129a3d9d3a9cf0739230fe"}
Feb 24 10:34:43 crc kubenswrapper[4755]: I0224 10:34:43.816977 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2f320527-691f-48e9-a243-f60bc805da39","Type":"ContainerStarted","Data":"249cff2e52b9673481d5d24fc8158108cdf8f7eb45d935cdf1ec11a460574df5"}
Feb 24 10:34:43 crc kubenswrapper[4755]: I0224 10:34:43.817022 4755 scope.go:117] "RemoveContainer" containerID="1beb9a67cbcfc1185f8eeecd3f6602727d0ca7b2e48e874d65d187bbd5cdc985"
Feb 24 10:34:44 crc kubenswrapper[4755]: I0224 10:34:44.900369 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40676: no serving certificate available for the kubelet"
Feb 24 10:34:45 crc kubenswrapper[4755]: I0224 10:34:45.856589 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40692: no serving certificate available for the kubelet"
Feb 24 10:34:47 crc kubenswrapper[4755]: I0224 10:34:47.940134 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40702: no serving certificate available for the kubelet"
Feb 24 10:34:48 crc kubenswrapper[4755]: I0224 10:34:48.907774 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40716: no serving certificate available for the kubelet"
Feb 24 10:34:49 crc kubenswrapper[4755]: I0224 10:34:49.887248 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 24 10:34:49 crc kubenswrapper[4755]: I0224 10:34:49.887682 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 24 10:34:50 crc kubenswrapper[4755]: I0224 10:34:50.989562 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40732: no serving certificate available for the kubelet"
Feb 24 10:34:51 crc kubenswrapper[4755]: I0224 10:34:51.289480 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 24 10:34:51 crc kubenswrapper[4755]: I0224 10:34:51.289558 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 24 10:34:51 crc kubenswrapper[4755]: I0224 10:34:51.695242 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 10:34:51 crc kubenswrapper[4755]: I0224 10:34:51.695355 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 10:34:51 crc kubenswrapper[4755]: I0224 10:34:51.695441 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll"
Feb 24 10:34:51 crc kubenswrapper[4755]: I0224 10:34:51.696698 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed"} pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 24 10:34:51 crc kubenswrapper[4755]: I0224 10:34:51.696808 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" containerID="cri-o://3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed" gracePeriod=600
Feb 24 10:34:51 crc kubenswrapper[4755]: E0224 10:34:51.854173 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5"
Feb 24 10:34:51 crc kubenswrapper[4755]: I0224 10:34:51.903688 4755 generic.go:334] "Generic (PLEG): container finished" podID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed" exitCode=0
Feb 24 10:34:51 crc kubenswrapper[4755]: I0224 10:34:51.903773 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerDied","Data":"3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed"}
Feb 24 10:34:51 crc kubenswrapper[4755]: I0224 10:34:51.903919 4755 scope.go:117] "RemoveContainer" containerID="3ce3dda87a828ebb337745ef70df8dd0ea69f2104d8a6f36ce5c4c74f7a0bf28"
Feb 24 10:34:51 crc kubenswrapper[4755]: I0224 10:34:51.904709 4755 scope.go:117] "RemoveContainer" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed"
Feb 24 10:34:51 crc kubenswrapper[4755]: E0224 10:34:51.905257 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5"
Feb 24 10:34:51 crc kubenswrapper[4755]: I0224 10:34:51.968045 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40742: no serving certificate available for the kubelet"
Feb 24 10:34:53 crc kubenswrapper[4755]: I0224 10:34:53.803809 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33068: no serving certificate available for the kubelet"
Feb 24 10:34:54 crc kubenswrapper[4755]: I0224 10:34:54.031991 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33082: no serving certificate available for the kubelet"
Feb 24 10:34:55 crc kubenswrapper[4755]: I0224 10:34:55.019284 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33092: no serving certificate available for the kubelet"
Feb 24 10:34:57 crc kubenswrapper[4755]: I0224 10:34:57.072522 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33094: no serving certificate available for the kubelet"
Feb 24 10:34:58 crc kubenswrapper[4755]: I0224 10:34:58.055246 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33106: no serving certificate available for the kubelet"
Feb 24 10:35:00 crc kubenswrapper[4755]: I0224 10:35:00.128855 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33122: no serving certificate available for the kubelet"
Feb 24 10:35:01 crc kubenswrapper[4755]: I0224 10:35:01.088864 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33130: no serving certificate available for the kubelet"
Feb 24 10:35:03 crc kubenswrapper[4755]: I0224 10:35:03.168445 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33142: no serving certificate available for the kubelet"
Feb 24 10:35:03 crc kubenswrapper[4755]: I0224 10:35:03.316223 4755 scope.go:117] "RemoveContainer" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed"
Feb 24 10:35:03 crc kubenswrapper[4755]: E0224 10:35:03.316477 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5"
Feb 24 10:35:04 crc kubenswrapper[4755]: I0224 10:35:04.130307 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56188: no serving certificate available for the kubelet"
Feb 24 10:35:06 crc kubenswrapper[4755]: I0224 10:35:06.217221 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56198: no serving certificate available for the kubelet"
Feb 24 10:35:07 crc kubenswrapper[4755]: I0224 10:35:07.163182 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56210: no serving certificate available for the kubelet"
Feb 24 10:35:09 crc kubenswrapper[4755]: I0224 10:35:09.252213 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56224: no serving certificate available for the kubelet"
Feb 24 10:35:10 crc kubenswrapper[4755]: I0224 10:35:10.223143 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56228: no serving certificate available for the kubelet"
Feb 24 10:35:12 crc kubenswrapper[4755]: I0224 10:35:12.299345 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56236: no serving certificate available for the kubelet"
Feb 24 10:35:13 crc kubenswrapper[4755]: I0224 10:35:13.275549 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56242: no serving certificate available for the kubelet"
Feb 24 10:35:15 crc kubenswrapper[4755]: I0224 10:35:15.354102 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47070: no serving certificate available for the kubelet"
Feb 24 10:35:16 crc kubenswrapper[4755]: I0224 10:35:16.325996 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47076: no serving certificate available for the kubelet"
Feb 24 10:35:18 crc kubenswrapper[4755]: I0224 10:35:18.316900 4755 scope.go:117] "RemoveContainer" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed"
Feb 24 10:35:18 crc kubenswrapper[4755]: E0224 10:35:18.317270 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5"
Feb 24 10:35:18 crc kubenswrapper[4755]: I0224 10:35:18.402838 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47088: no serving certificate available for the kubelet"
Feb 24 10:35:19 crc kubenswrapper[4755]: I0224 10:35:19.367032 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47104: no serving certificate available for the kubelet"
Feb 24 10:35:19 crc kubenswrapper[4755]: I0224 10:35:19.461100 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ksxkb"]
Feb 24 10:35:19 crc kubenswrapper[4755]: E0224 10:35:19.461720 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ace744-fff5-453f-a7aa-27041238a22e" containerName="extract-utilities"
Feb 24 10:35:19 crc kubenswrapper[4755]: I0224 10:35:19.461783 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ace744-fff5-453f-a7aa-27041238a22e" containerName="extract-utilities"
Feb 24 10:35:19 crc kubenswrapper[4755]: E0224 10:35:19.461832 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ace744-fff5-453f-a7aa-27041238a22e" containerName="registry-server"
Feb 24 10:35:19 crc kubenswrapper[4755]: I0224 10:35:19.461875 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ace744-fff5-453f-a7aa-27041238a22e" containerName="registry-server"
Feb 24 10:35:19 crc kubenswrapper[4755]: E0224 10:35:19.461906 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ace744-fff5-453f-a7aa-27041238a22e" containerName="extract-content"
Feb 24 10:35:19 crc kubenswrapper[4755]: I0224 10:35:19.461919 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ace744-fff5-453f-a7aa-27041238a22e" containerName="extract-content"
Feb 24 10:35:19 crc kubenswrapper[4755]: I0224 10:35:19.462483 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ace744-fff5-453f-a7aa-27041238a22e" containerName="registry-server"
Feb 24 10:35:19 crc kubenswrapper[4755]: I0224 10:35:19.465586 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ksxkb"
Feb 24 10:35:19 crc kubenswrapper[4755]: I0224 10:35:19.496500 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ksxkb"]
Feb 24 10:35:19 crc kubenswrapper[4755]: I0224 10:35:19.654929 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8cr7\" (UniqueName: \"kubernetes.io/projected/119b3404-3f9e-4ac2-8657-7fb76d7eeb82-kube-api-access-s8cr7\") pod \"community-operators-ksxkb\" (UID: \"119b3404-3f9e-4ac2-8657-7fb76d7eeb82\") " pod="openshift-marketplace/community-operators-ksxkb"
Feb 24 10:35:19 crc kubenswrapper[4755]: I0224 10:35:19.655021 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/119b3404-3f9e-4ac2-8657-7fb76d7eeb82-catalog-content\") pod \"community-operators-ksxkb\" (UID: \"119b3404-3f9e-4ac2-8657-7fb76d7eeb82\") " pod="openshift-marketplace/community-operators-ksxkb"
Feb 24 10:35:19 crc kubenswrapper[4755]: I0224 10:35:19.655114 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/119b3404-3f9e-4ac2-8657-7fb76d7eeb82-utilities\") pod \"community-operators-ksxkb\" (UID: \"119b3404-3f9e-4ac2-8657-7fb76d7eeb82\") " pod="openshift-marketplace/community-operators-ksxkb"
Feb 24 10:35:19 crc kubenswrapper[4755]: I0224 10:35:19.756962 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8cr7\" (UniqueName: \"kubernetes.io/projected/119b3404-3f9e-4ac2-8657-7fb76d7eeb82-kube-api-access-s8cr7\") pod \"community-operators-ksxkb\" (UID: \"119b3404-3f9e-4ac2-8657-7fb76d7eeb82\") " pod="openshift-marketplace/community-operators-ksxkb"
Feb 24 10:35:19 crc kubenswrapper[4755]: I0224 10:35:19.757017 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/119b3404-3f9e-4ac2-8657-7fb76d7eeb82-catalog-content\") pod \"community-operators-ksxkb\" (UID: \"119b3404-3f9e-4ac2-8657-7fb76d7eeb82\") " pod="openshift-marketplace/community-operators-ksxkb"
Feb 24 10:35:19 crc kubenswrapper[4755]: I0224 10:35:19.757051 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/119b3404-3f9e-4ac2-8657-7fb76d7eeb82-utilities\") pod \"community-operators-ksxkb\" (UID: \"119b3404-3f9e-4ac2-8657-7fb76d7eeb82\") " pod="openshift-marketplace/community-operators-ksxkb"
Feb 24 10:35:19 crc kubenswrapper[4755]: I0224 10:35:19.757666 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/119b3404-3f9e-4ac2-8657-7fb76d7eeb82-catalog-content\") pod \"community-operators-ksxkb\" (UID: \"119b3404-3f9e-4ac2-8657-7fb76d7eeb82\") " pod="openshift-marketplace/community-operators-ksxkb"
Feb 24 10:35:19 crc kubenswrapper[4755]: I0224 10:35:19.757747 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/119b3404-3f9e-4ac2-8657-7fb76d7eeb82-utilities\") pod \"community-operators-ksxkb\" (UID: \"119b3404-3f9e-4ac2-8657-7fb76d7eeb82\") " pod="openshift-marketplace/community-operators-ksxkb"
Feb 24 10:35:19 crc kubenswrapper[4755]: I0224 10:35:19.785883 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8cr7\" (UniqueName: \"kubernetes.io/projected/119b3404-3f9e-4ac2-8657-7fb76d7eeb82-kube-api-access-s8cr7\") pod \"community-operators-ksxkb\" (UID: \"119b3404-3f9e-4ac2-8657-7fb76d7eeb82\") " pod="openshift-marketplace/community-operators-ksxkb"
Feb 24 10:35:19 crc kubenswrapper[4755]: I0224 10:35:19.803785 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ksxkb"
Feb 24 10:35:20 crc kubenswrapper[4755]: W0224 10:35:20.336748 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod119b3404_3f9e_4ac2_8657_7fb76d7eeb82.slice/crio-4dcc7458de86b056777b43630504bfcc07622e38c6e563f509a1c60f43c48574 WatchSource:0}: Error finding container 4dcc7458de86b056777b43630504bfcc07622e38c6e563f509a1c60f43c48574: Status 404 returned error can't find the container with id 4dcc7458de86b056777b43630504bfcc07622e38c6e563f509a1c60f43c48574
Feb 24 10:35:20 crc kubenswrapper[4755]: I0224 10:35:20.344767 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ksxkb"]
Feb 24 10:35:21 crc kubenswrapper[4755]: I0224 10:35:21.168849 4755 generic.go:334] "Generic (PLEG): container finished" podID="119b3404-3f9e-4ac2-8657-7fb76d7eeb82" containerID="ad0c6307d645a879fe275708429c9546f094a2117b1fc4965a5ea42d5987b454" exitCode=0
Feb 24 10:35:21 crc kubenswrapper[4755]: I0224 10:35:21.168963 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksxkb" event={"ID":"119b3404-3f9e-4ac2-8657-7fb76d7eeb82","Type":"ContainerDied","Data":"ad0c6307d645a879fe275708429c9546f094a2117b1fc4965a5ea42d5987b454"}
Feb 24 10:35:21 crc kubenswrapper[4755]: I0224 10:35:21.169322 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksxkb" event={"ID":"119b3404-3f9e-4ac2-8657-7fb76d7eeb82","Type":"ContainerStarted","Data":"4dcc7458de86b056777b43630504bfcc07622e38c6e563f509a1c60f43c48574"}
Feb 24 10:35:21 crc kubenswrapper[4755]: I0224 10:35:21.468589 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47114: no serving certificate available for the kubelet"
Feb 24 10:35:22 crc kubenswrapper[4755]: I0224 10:35:22.182343 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksxkb" event={"ID":"119b3404-3f9e-4ac2-8657-7fb76d7eeb82","Type":"ContainerStarted","Data":"5131efb2f07062695d4d76f0326bb0bb7f98495a0d4930f3e46d853a6c217ee1"}
Feb 24 10:35:22 crc kubenswrapper[4755]: I0224 10:35:22.409818 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47128: no serving certificate available for the kubelet"
Feb 24 10:35:23 crc kubenswrapper[4755]: I0224 10:35:23.194423 4755 generic.go:334] "Generic (PLEG): container finished" podID="119b3404-3f9e-4ac2-8657-7fb76d7eeb82" containerID="5131efb2f07062695d4d76f0326bb0bb7f98495a0d4930f3e46d853a6c217ee1" exitCode=0
Feb 24 10:35:23 crc kubenswrapper[4755]: I0224 10:35:23.194506 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksxkb" event={"ID":"119b3404-3f9e-4ac2-8657-7fb76d7eeb82","Type":"ContainerDied","Data":"5131efb2f07062695d4d76f0326bb0bb7f98495a0d4930f3e46d853a6c217ee1"}
Feb 24 10:35:24 crc kubenswrapper[4755]: I0224 10:35:24.208337 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksxkb" event={"ID":"119b3404-3f9e-4ac2-8657-7fb76d7eeb82","Type":"ContainerStarted","Data":"dfa1d246cfcc58d4c3c5c7929955cf9ef59f4cdcc7082023d03cbfdaf7aa5624"}
Feb 24 10:35:24 crc kubenswrapper[4755]: I0224 10:35:24.245740 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ksxkb" podStartSLOduration=2.771353726 podStartE2EDuration="5.245717277s" podCreationTimestamp="2026-02-24 10:35:19 +0000 UTC" firstStartedPulling="2026-02-24 10:35:21.171668046 +0000 UTC m=+2425.628190629" lastFinishedPulling="2026-02-24 10:35:23.646031637 +0000 UTC m=+2428.102554180" observedRunningTime="2026-02-24 10:35:24.236679399 +0000 UTC m=+2428.693201952" watchObservedRunningTime="2026-02-24 10:35:24.245717277 +0000 UTC m=+2428.702239830"
Feb 24 10:35:24 crc kubenswrapper[4755]: I0224 10:35:24.509961 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40312: no serving certificate available for the kubelet"
Feb 24 10:35:25 crc kubenswrapper[4755]: I0224 10:35:25.454733 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40316: no serving certificate available for the kubelet"
Feb 24 10:35:27 crc kubenswrapper[4755]: I0224 10:35:27.578705 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40318: no serving certificate available for the kubelet"
Feb 24 10:35:28 crc kubenswrapper[4755]: I0224 10:35:28.506796 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40324: no serving certificate available for the kubelet"
Feb 24 10:35:29 crc kubenswrapper[4755]: I0224 10:35:29.316937 4755 scope.go:117] "RemoveContainer" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed"
Feb 24 10:35:29 crc kubenswrapper[4755]: E0224 10:35:29.317432 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5"
Feb 24 10:35:29 crc kubenswrapper[4755]: I0224 10:35:29.804326 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ksxkb"
Feb 24 10:35:29 crc kubenswrapper[4755]: I0224 10:35:29.804383 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ksxkb"
Feb 24 10:35:29 crc kubenswrapper[4755]: I0224 10:35:29.862081 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ksxkb"
Feb 24 10:35:30 crc kubenswrapper[4755]: I0224 10:35:30.328923 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ksxkb"
Feb 24 10:35:30 crc kubenswrapper[4755]: I0224 10:35:30.397316 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ksxkb"]
Feb 24 10:35:30 crc kubenswrapper[4755]: I0224 10:35:30.617034 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40334: no serving certificate available for the kubelet"
Feb 24 10:35:31 crc kubenswrapper[4755]: I0224 10:35:31.547887 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40340: no serving certificate available for the kubelet"
Feb 24 10:35:32 crc kubenswrapper[4755]: I0224 10:35:32.281629 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ksxkb" podUID="119b3404-3f9e-4ac2-8657-7fb76d7eeb82" containerName="registry-server" containerID="cri-o://dfa1d246cfcc58d4c3c5c7929955cf9ef59f4cdcc7082023d03cbfdaf7aa5624" gracePeriod=2
Feb 24 10:35:32 crc kubenswrapper[4755]: I0224 10:35:32.689672 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ksxkb"
Feb 24 10:35:32 crc kubenswrapper[4755]: I0224 10:35:32.788222 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8cr7\" (UniqueName: \"kubernetes.io/projected/119b3404-3f9e-4ac2-8657-7fb76d7eeb82-kube-api-access-s8cr7\") pod \"119b3404-3f9e-4ac2-8657-7fb76d7eeb82\" (UID: \"119b3404-3f9e-4ac2-8657-7fb76d7eeb82\") "
Feb 24 10:35:32 crc kubenswrapper[4755]: I0224 10:35:32.788513 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/119b3404-3f9e-4ac2-8657-7fb76d7eeb82-catalog-content\") pod \"119b3404-3f9e-4ac2-8657-7fb76d7eeb82\" (UID: \"119b3404-3f9e-4ac2-8657-7fb76d7eeb82\") "
Feb 24 10:35:32 crc kubenswrapper[4755]: I0224 10:35:32.788648 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/119b3404-3f9e-4ac2-8657-7fb76d7eeb82-utilities\") pod \"119b3404-3f9e-4ac2-8657-7fb76d7eeb82\" (UID: \"119b3404-3f9e-4ac2-8657-7fb76d7eeb82\") "
Feb 24 10:35:32 crc kubenswrapper[4755]: I0224 10:35:32.789892 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/119b3404-3f9e-4ac2-8657-7fb76d7eeb82-utilities" (OuterVolumeSpecName: "utilities") pod "119b3404-3f9e-4ac2-8657-7fb76d7eeb82" (UID: "119b3404-3f9e-4ac2-8657-7fb76d7eeb82"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 10:35:32 crc kubenswrapper[4755]: I0224 10:35:32.797512 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/119b3404-3f9e-4ac2-8657-7fb76d7eeb82-kube-api-access-s8cr7" (OuterVolumeSpecName: "kube-api-access-s8cr7") pod "119b3404-3f9e-4ac2-8657-7fb76d7eeb82" (UID: "119b3404-3f9e-4ac2-8657-7fb76d7eeb82"). InnerVolumeSpecName "kube-api-access-s8cr7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 10:35:32 crc kubenswrapper[4755]: I0224 10:35:32.869738 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/119b3404-3f9e-4ac2-8657-7fb76d7eeb82-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "119b3404-3f9e-4ac2-8657-7fb76d7eeb82" (UID: "119b3404-3f9e-4ac2-8657-7fb76d7eeb82"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 10:35:32 crc kubenswrapper[4755]: I0224 10:35:32.890269 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/119b3404-3f9e-4ac2-8657-7fb76d7eeb82-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 10:35:32 crc kubenswrapper[4755]: I0224 10:35:32.890310 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/119b3404-3f9e-4ac2-8657-7fb76d7eeb82-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 10:35:32 crc kubenswrapper[4755]: I0224 10:35:32.890323 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8cr7\" (UniqueName: \"kubernetes.io/projected/119b3404-3f9e-4ac2-8657-7fb76d7eeb82-kube-api-access-s8cr7\") on node \"crc\" DevicePath \"\""
Feb 24 10:35:33 crc kubenswrapper[4755]: I0224 10:35:33.296901 4755 generic.go:334] "Generic (PLEG): container finished" podID="119b3404-3f9e-4ac2-8657-7fb76d7eeb82" containerID="dfa1d246cfcc58d4c3c5c7929955cf9ef59f4cdcc7082023d03cbfdaf7aa5624" exitCode=0
Feb 24 10:35:33 crc kubenswrapper[4755]: I0224 10:35:33.296938 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ksxkb"
Feb 24 10:35:33 crc kubenswrapper[4755]: I0224 10:35:33.296952 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksxkb" event={"ID":"119b3404-3f9e-4ac2-8657-7fb76d7eeb82","Type":"ContainerDied","Data":"dfa1d246cfcc58d4c3c5c7929955cf9ef59f4cdcc7082023d03cbfdaf7aa5624"}
Feb 24 10:35:33 crc kubenswrapper[4755]: I0224 10:35:33.298089 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksxkb" event={"ID":"119b3404-3f9e-4ac2-8657-7fb76d7eeb82","Type":"ContainerDied","Data":"4dcc7458de86b056777b43630504bfcc07622e38c6e563f509a1c60f43c48574"}
Feb 24 10:35:33 crc kubenswrapper[4755]: I0224 10:35:33.298154 4755 scope.go:117] "RemoveContainer" containerID="dfa1d246cfcc58d4c3c5c7929955cf9ef59f4cdcc7082023d03cbfdaf7aa5624"
Feb 24 10:35:33 crc kubenswrapper[4755]: I0224 10:35:33.357326 4755 scope.go:117] "RemoveContainer" containerID="5131efb2f07062695d4d76f0326bb0bb7f98495a0d4930f3e46d853a6c217ee1"
Feb 24 10:35:33 crc kubenswrapper[4755]: I0224 10:35:33.362356 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ksxkb"]
Feb 24 10:35:33 crc kubenswrapper[4755]: I0224 10:35:33.387270 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ksxkb"]
Feb 24 10:35:33 crc kubenswrapper[4755]: I0224 10:35:33.400535 4755 scope.go:117] "RemoveContainer" containerID="ad0c6307d645a879fe275708429c9546f094a2117b1fc4965a5ea42d5987b454"
Feb 24 10:35:33 crc kubenswrapper[4755]: I0224 10:35:33.451379 4755 scope.go:117] "RemoveContainer" containerID="dfa1d246cfcc58d4c3c5c7929955cf9ef59f4cdcc7082023d03cbfdaf7aa5624"
Feb 24 10:35:33 crc kubenswrapper[4755]: E0224 10:35:33.451780 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfa1d246cfcc58d4c3c5c7929955cf9ef59f4cdcc7082023d03cbfdaf7aa5624\": container with ID starting with dfa1d246cfcc58d4c3c5c7929955cf9ef59f4cdcc7082023d03cbfdaf7aa5624 not found: ID does not exist" containerID="dfa1d246cfcc58d4c3c5c7929955cf9ef59f4cdcc7082023d03cbfdaf7aa5624"
Feb 24 10:35:33 crc kubenswrapper[4755]: I0224 10:35:33.451834 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfa1d246cfcc58d4c3c5c7929955cf9ef59f4cdcc7082023d03cbfdaf7aa5624"} err="failed to get container status \"dfa1d246cfcc58d4c3c5c7929955cf9ef59f4cdcc7082023d03cbfdaf7aa5624\": rpc error: code = NotFound desc = could not find container \"dfa1d246cfcc58d4c3c5c7929955cf9ef59f4cdcc7082023d03cbfdaf7aa5624\": container with ID starting with dfa1d246cfcc58d4c3c5c7929955cf9ef59f4cdcc7082023d03cbfdaf7aa5624 not found: ID does not exist"
Feb 24 10:35:33 crc kubenswrapper[4755]: I0224 10:35:33.451863 4755 scope.go:117] "RemoveContainer" containerID="5131efb2f07062695d4d76f0326bb0bb7f98495a0d4930f3e46d853a6c217ee1"
Feb 24 10:35:33 crc kubenswrapper[4755]: E0224 10:35:33.452384 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5131efb2f07062695d4d76f0326bb0bb7f98495a0d4930f3e46d853a6c217ee1\": container with ID starting with 5131efb2f07062695d4d76f0326bb0bb7f98495a0d4930f3e46d853a6c217ee1 not found: ID does not exist" containerID="5131efb2f07062695d4d76f0326bb0bb7f98495a0d4930f3e46d853a6c217ee1"
Feb 24 10:35:33 crc kubenswrapper[4755]: I0224 10:35:33.452423 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5131efb2f07062695d4d76f0326bb0bb7f98495a0d4930f3e46d853a6c217ee1"} err="failed to get container status \"5131efb2f07062695d4d76f0326bb0bb7f98495a0d4930f3e46d853a6c217ee1\": rpc error: code = NotFound desc = could not find container \"5131efb2f07062695d4d76f0326bb0bb7f98495a0d4930f3e46d853a6c217ee1\": container with ID starting with 5131efb2f07062695d4d76f0326bb0bb7f98495a0d4930f3e46d853a6c217ee1 not found: ID does not exist"
Feb 24 10:35:33 crc kubenswrapper[4755]: I0224 10:35:33.452449 4755 scope.go:117] "RemoveContainer" containerID="ad0c6307d645a879fe275708429c9546f094a2117b1fc4965a5ea42d5987b454"
Feb 24 10:35:33 crc kubenswrapper[4755]: E0224 10:35:33.453050 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad0c6307d645a879fe275708429c9546f094a2117b1fc4965a5ea42d5987b454\": container with ID starting with ad0c6307d645a879fe275708429c9546f094a2117b1fc4965a5ea42d5987b454 not found: ID does not exist" containerID="ad0c6307d645a879fe275708429c9546f094a2117b1fc4965a5ea42d5987b454"
Feb 24 10:35:33 crc kubenswrapper[4755]: I0224 10:35:33.453155 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad0c6307d645a879fe275708429c9546f094a2117b1fc4965a5ea42d5987b454"} err="failed to get container status \"ad0c6307d645a879fe275708429c9546f094a2117b1fc4965a5ea42d5987b454\": rpc error: code = NotFound desc = could not find container \"ad0c6307d645a879fe275708429c9546f094a2117b1fc4965a5ea42d5987b454\": container with ID starting with ad0c6307d645a879fe275708429c9546f094a2117b1fc4965a5ea42d5987b454 not found: ID does not exist"
Feb 24 10:35:33 crc kubenswrapper[4755]: I0224 10:35:33.665728 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40354: no serving certificate available for the kubelet"
Feb 24 10:35:34 crc kubenswrapper[4755]: I0224 10:35:34.330099 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="119b3404-3f9e-4ac2-8657-7fb76d7eeb82" path="/var/lib/kubelet/pods/119b3404-3f9e-4ac2-8657-7fb76d7eeb82/volumes"
Feb 24 10:35:34 crc kubenswrapper[4755]: I0224 10:35:34.645755 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48928: no serving certificate available for the kubelet"
Feb 24 10:35:36 crc kubenswrapper[4755]: I0224 10:35:36.711686 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48934: no serving certificate available for the kubelet"
Feb 24 10:35:37 crc kubenswrapper[4755]: I0224 10:35:37.737544 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48940: no serving certificate available for the kubelet"
Feb 24 10:35:39 crc kubenswrapper[4755]: I0224 10:35:39.756580 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48946: no serving certificate available for the kubelet"
Feb 24 10:35:40 crc kubenswrapper[4755]: I0224 10:35:40.783862 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48956: no serving certificate available for the kubelet"
Feb 24 10:35:42 crc kubenswrapper[4755]: I0224 10:35:42.317133 4755 scope.go:117] "RemoveContainer" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed"
Feb 24 10:35:42 crc kubenswrapper[4755]: E0224 10:35:42.317925 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5"
Feb 24 10:35:42 crc kubenswrapper[4755]: I0224 10:35:42.807794 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48970: no serving certificate available for the kubelet"
Feb 24 10:35:43 crc kubenswrapper[4755]: I0224 10:35:43.839920 4755 ???:1] "http: TLS handshake error from 192.168.126.11:32780: no serving certificate available for the kubelet"
Feb 24 10:35:45 crc kubenswrapper[4755]: I0224 10:35:45.839993 4755 ???:1] "http: TLS handshake error from 192.168.126.11:32796: no serving certificate available for the kubelet"
Feb 24 10:35:46 crc kubenswrapper[4755]: I0224 10:35:46.894741 4755 ???:1] "http: TLS handshake error from
192.168.126.11:32798: no serving certificate available for the kubelet" Feb 24 10:35:48 crc kubenswrapper[4755]: I0224 10:35:48.880734 4755 ???:1] "http: TLS handshake error from 192.168.126.11:32806: no serving certificate available for the kubelet" Feb 24 10:35:49 crc kubenswrapper[4755]: I0224 10:35:49.985240 4755 ???:1] "http: TLS handshake error from 192.168.126.11:32808: no serving certificate available for the kubelet" Feb 24 10:35:51 crc kubenswrapper[4755]: I0224 10:35:51.930742 4755 ???:1] "http: TLS handshake error from 192.168.126.11:32822: no serving certificate available for the kubelet" Feb 24 10:35:53 crc kubenswrapper[4755]: I0224 10:35:53.036803 4755 ???:1] "http: TLS handshake error from 192.168.126.11:32826: no serving certificate available for the kubelet" Feb 24 10:35:54 crc kubenswrapper[4755]: I0224 10:35:54.316988 4755 scope.go:117] "RemoveContainer" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed" Feb 24 10:35:54 crc kubenswrapper[4755]: E0224 10:35:54.317871 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:35:54 crc kubenswrapper[4755]: I0224 10:35:54.984662 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37436: no serving certificate available for the kubelet" Feb 24 10:35:56 crc kubenswrapper[4755]: I0224 10:35:56.088154 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37450: no serving certificate available for the kubelet" Feb 24 10:35:58 crc kubenswrapper[4755]: I0224 10:35:58.043095 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37464: no serving certificate available for the kubelet" 
Feb 24 10:35:59 crc kubenswrapper[4755]: I0224 10:35:59.127605 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37466: no serving certificate available for the kubelet" Feb 24 10:36:01 crc kubenswrapper[4755]: I0224 10:36:01.095900 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37468: no serving certificate available for the kubelet" Feb 24 10:36:02 crc kubenswrapper[4755]: I0224 10:36:02.187377 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37470: no serving certificate available for the kubelet" Feb 24 10:36:04 crc kubenswrapper[4755]: I0224 10:36:04.144382 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43284: no serving certificate available for the kubelet" Feb 24 10:36:05 crc kubenswrapper[4755]: I0224 10:36:05.251430 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43292: no serving certificate available for the kubelet" Feb 24 10:36:07 crc kubenswrapper[4755]: I0224 10:36:07.199264 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43304: no serving certificate available for the kubelet" Feb 24 10:36:07 crc kubenswrapper[4755]: I0224 10:36:07.317113 4755 scope.go:117] "RemoveContainer" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed" Feb 24 10:36:07 crc kubenswrapper[4755]: E0224 10:36:07.317886 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:36:08 crc kubenswrapper[4755]: I0224 10:36:08.305855 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43318: no serving certificate available for the kubelet" Feb 24 10:36:10 crc kubenswrapper[4755]: I0224 10:36:10.264417 4755 ???:1] 
"http: TLS handshake error from 192.168.126.11:43334: no serving certificate available for the kubelet" Feb 24 10:36:11 crc kubenswrapper[4755]: I0224 10:36:11.360940 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43350: no serving certificate available for the kubelet" Feb 24 10:36:13 crc kubenswrapper[4755]: I0224 10:36:13.329509 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43352: no serving certificate available for the kubelet" Feb 24 10:36:14 crc kubenswrapper[4755]: I0224 10:36:14.415475 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52964: no serving certificate available for the kubelet" Feb 24 10:36:16 crc kubenswrapper[4755]: I0224 10:36:16.383683 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52968: no serving certificate available for the kubelet" Feb 24 10:36:17 crc kubenswrapper[4755]: I0224 10:36:17.467851 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52984: no serving certificate available for the kubelet" Feb 24 10:36:19 crc kubenswrapper[4755]: I0224 10:36:19.449955 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52992: no serving certificate available for the kubelet" Feb 24 10:36:20 crc kubenswrapper[4755]: I0224 10:36:20.514954 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53002: no serving certificate available for the kubelet" Feb 24 10:36:21 crc kubenswrapper[4755]: I0224 10:36:21.316863 4755 scope.go:117] "RemoveContainer" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed" Feb 24 10:36:21 crc kubenswrapper[4755]: E0224 10:36:21.317315 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" 
podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:36:22 crc kubenswrapper[4755]: I0224 10:36:22.498536 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53012: no serving certificate available for the kubelet" Feb 24 10:36:23 crc kubenswrapper[4755]: I0224 10:36:23.573110 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53018: no serving certificate available for the kubelet" Feb 24 10:36:25 crc kubenswrapper[4755]: I0224 10:36:25.552339 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47990: no serving certificate available for the kubelet" Feb 24 10:36:26 crc kubenswrapper[4755]: I0224 10:36:26.630579 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48002: no serving certificate available for the kubelet" Feb 24 10:36:28 crc kubenswrapper[4755]: I0224 10:36:28.599768 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48010: no serving certificate available for the kubelet" Feb 24 10:36:29 crc kubenswrapper[4755]: I0224 10:36:29.677504 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48016: no serving certificate available for the kubelet" Feb 24 10:36:31 crc kubenswrapper[4755]: I0224 10:36:31.651865 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48030: no serving certificate available for the kubelet" Feb 24 10:36:32 crc kubenswrapper[4755]: I0224 10:36:32.713550 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48040: no serving certificate available for the kubelet" Feb 24 10:36:34 crc kubenswrapper[4755]: I0224 10:36:34.695365 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45870: no serving certificate available for the kubelet" Feb 24 10:36:35 crc kubenswrapper[4755]: I0224 10:36:35.757644 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45872: no serving certificate available for the kubelet" Feb 24 10:36:36 crc kubenswrapper[4755]: I0224 10:36:36.332225 4755 scope.go:117] "RemoveContainer" 
containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed" Feb 24 10:36:36 crc kubenswrapper[4755]: E0224 10:36:36.333576 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:36:37 crc kubenswrapper[4755]: I0224 10:36:37.750296 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45876: no serving certificate available for the kubelet" Feb 24 10:36:38 crc kubenswrapper[4755]: I0224 10:36:38.808507 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45880: no serving certificate available for the kubelet" Feb 24 10:36:40 crc kubenswrapper[4755]: I0224 10:36:40.807006 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45888: no serving certificate available for the kubelet" Feb 24 10:36:41 crc kubenswrapper[4755]: I0224 10:36:41.859265 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45890: no serving certificate available for the kubelet" Feb 24 10:36:43 crc kubenswrapper[4755]: I0224 10:36:43.533916 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ndxqv"] Feb 24 10:36:43 crc kubenswrapper[4755]: E0224 10:36:43.534476 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="119b3404-3f9e-4ac2-8657-7fb76d7eeb82" containerName="registry-server" Feb 24 10:36:43 crc kubenswrapper[4755]: I0224 10:36:43.534498 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="119b3404-3f9e-4ac2-8657-7fb76d7eeb82" containerName="registry-server" Feb 24 10:36:43 crc kubenswrapper[4755]: E0224 10:36:43.534544 4755 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="119b3404-3f9e-4ac2-8657-7fb76d7eeb82" containerName="extract-utilities" Feb 24 10:36:43 crc kubenswrapper[4755]: I0224 10:36:43.534558 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="119b3404-3f9e-4ac2-8657-7fb76d7eeb82" containerName="extract-utilities" Feb 24 10:36:43 crc kubenswrapper[4755]: E0224 10:36:43.534591 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="119b3404-3f9e-4ac2-8657-7fb76d7eeb82" containerName="extract-content" Feb 24 10:36:43 crc kubenswrapper[4755]: I0224 10:36:43.534606 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="119b3404-3f9e-4ac2-8657-7fb76d7eeb82" containerName="extract-content" Feb 24 10:36:43 crc kubenswrapper[4755]: I0224 10:36:43.534933 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="119b3404-3f9e-4ac2-8657-7fb76d7eeb82" containerName="registry-server" Feb 24 10:36:43 crc kubenswrapper[4755]: I0224 10:36:43.536973 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ndxqv" Feb 24 10:36:43 crc kubenswrapper[4755]: I0224 10:36:43.563138 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ndxqv"] Feb 24 10:36:43 crc kubenswrapper[4755]: I0224 10:36:43.600431 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b27637e-62d5-4bac-a200-959e6c15c428-catalog-content\") pod \"redhat-marketplace-ndxqv\" (UID: \"5b27637e-62d5-4bac-a200-959e6c15c428\") " pod="openshift-marketplace/redhat-marketplace-ndxqv" Feb 24 10:36:43 crc kubenswrapper[4755]: I0224 10:36:43.600486 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b27637e-62d5-4bac-a200-959e6c15c428-utilities\") pod \"redhat-marketplace-ndxqv\" (UID: \"5b27637e-62d5-4bac-a200-959e6c15c428\") 
" pod="openshift-marketplace/redhat-marketplace-ndxqv" Feb 24 10:36:43 crc kubenswrapper[4755]: I0224 10:36:43.600561 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62fqr\" (UniqueName: \"kubernetes.io/projected/5b27637e-62d5-4bac-a200-959e6c15c428-kube-api-access-62fqr\") pod \"redhat-marketplace-ndxqv\" (UID: \"5b27637e-62d5-4bac-a200-959e6c15c428\") " pod="openshift-marketplace/redhat-marketplace-ndxqv" Feb 24 10:36:43 crc kubenswrapper[4755]: I0224 10:36:43.701564 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b27637e-62d5-4bac-a200-959e6c15c428-catalog-content\") pod \"redhat-marketplace-ndxqv\" (UID: \"5b27637e-62d5-4bac-a200-959e6c15c428\") " pod="openshift-marketplace/redhat-marketplace-ndxqv" Feb 24 10:36:43 crc kubenswrapper[4755]: I0224 10:36:43.701606 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b27637e-62d5-4bac-a200-959e6c15c428-utilities\") pod \"redhat-marketplace-ndxqv\" (UID: \"5b27637e-62d5-4bac-a200-959e6c15c428\") " pod="openshift-marketplace/redhat-marketplace-ndxqv" Feb 24 10:36:43 crc kubenswrapper[4755]: I0224 10:36:43.701643 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62fqr\" (UniqueName: \"kubernetes.io/projected/5b27637e-62d5-4bac-a200-959e6c15c428-kube-api-access-62fqr\") pod \"redhat-marketplace-ndxqv\" (UID: \"5b27637e-62d5-4bac-a200-959e6c15c428\") " pod="openshift-marketplace/redhat-marketplace-ndxqv" Feb 24 10:36:43 crc kubenswrapper[4755]: I0224 10:36:43.704200 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b27637e-62d5-4bac-a200-959e6c15c428-catalog-content\") pod \"redhat-marketplace-ndxqv\" (UID: \"5b27637e-62d5-4bac-a200-959e6c15c428\") " 
pod="openshift-marketplace/redhat-marketplace-ndxqv" Feb 24 10:36:43 crc kubenswrapper[4755]: I0224 10:36:43.704228 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b27637e-62d5-4bac-a200-959e6c15c428-utilities\") pod \"redhat-marketplace-ndxqv\" (UID: \"5b27637e-62d5-4bac-a200-959e6c15c428\") " pod="openshift-marketplace/redhat-marketplace-ndxqv" Feb 24 10:36:43 crc kubenswrapper[4755]: I0224 10:36:43.722879 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62fqr\" (UniqueName: \"kubernetes.io/projected/5b27637e-62d5-4bac-a200-959e6c15c428-kube-api-access-62fqr\") pod \"redhat-marketplace-ndxqv\" (UID: \"5b27637e-62d5-4bac-a200-959e6c15c428\") " pod="openshift-marketplace/redhat-marketplace-ndxqv" Feb 24 10:36:43 crc kubenswrapper[4755]: I0224 10:36:43.855343 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59332: no serving certificate available for the kubelet" Feb 24 10:36:43 crc kubenswrapper[4755]: I0224 10:36:43.869765 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ndxqv" Feb 24 10:36:44 crc kubenswrapper[4755]: I0224 10:36:44.376285 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ndxqv"] Feb 24 10:36:44 crc kubenswrapper[4755]: I0224 10:36:44.906187 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59334: no serving certificate available for the kubelet" Feb 24 10:36:44 crc kubenswrapper[4755]: I0224 10:36:44.982232 4755 generic.go:334] "Generic (PLEG): container finished" podID="5b27637e-62d5-4bac-a200-959e6c15c428" containerID="6d2441ec08b61e3eb2553357fabcc30c782f57e4d945142450166bc452bfa50d" exitCode=0 Feb 24 10:36:44 crc kubenswrapper[4755]: I0224 10:36:44.982325 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndxqv" event={"ID":"5b27637e-62d5-4bac-a200-959e6c15c428","Type":"ContainerDied","Data":"6d2441ec08b61e3eb2553357fabcc30c782f57e4d945142450166bc452bfa50d"} Feb 24 10:36:44 crc kubenswrapper[4755]: I0224 10:36:44.982617 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndxqv" event={"ID":"5b27637e-62d5-4bac-a200-959e6c15c428","Type":"ContainerStarted","Data":"d704dd957fc060265031ed8efbb354c0ccdd3ee09d3486a0e6d1f2e8ef71c09e"} Feb 24 10:36:45 crc kubenswrapper[4755]: I0224 10:36:45.993918 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndxqv" event={"ID":"5b27637e-62d5-4bac-a200-959e6c15c428","Type":"ContainerStarted","Data":"6de52b454946acf747817a43ee63c6838b337fab3b0d0f0d5fad8cf45b498981"} Feb 24 10:36:46 crc kubenswrapper[4755]: I0224 10:36:46.912242 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59340: no serving certificate available for the kubelet" Feb 24 10:36:47 crc kubenswrapper[4755]: I0224 10:36:47.004941 4755 generic.go:334] "Generic (PLEG): container finished" podID="5b27637e-62d5-4bac-a200-959e6c15c428" 
containerID="6de52b454946acf747817a43ee63c6838b337fab3b0d0f0d5fad8cf45b498981" exitCode=0 Feb 24 10:36:47 crc kubenswrapper[4755]: I0224 10:36:47.005022 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndxqv" event={"ID":"5b27637e-62d5-4bac-a200-959e6c15c428","Type":"ContainerDied","Data":"6de52b454946acf747817a43ee63c6838b337fab3b0d0f0d5fad8cf45b498981"} Feb 24 10:36:47 crc kubenswrapper[4755]: I0224 10:36:47.965249 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59344: no serving certificate available for the kubelet" Feb 24 10:36:48 crc kubenswrapper[4755]: I0224 10:36:48.020828 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndxqv" event={"ID":"5b27637e-62d5-4bac-a200-959e6c15c428","Type":"ContainerStarted","Data":"062ed46507d3c0466f5d588085a107f6e044bdb18cfdf01e08b7f1c73aa1907a"} Feb 24 10:36:48 crc kubenswrapper[4755]: I0224 10:36:48.051440 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ndxqv" podStartSLOduration=2.608210938 podStartE2EDuration="5.051412096s" podCreationTimestamp="2026-02-24 10:36:43 +0000 UTC" firstStartedPulling="2026-02-24 10:36:44.984512455 +0000 UTC m=+2509.441034998" lastFinishedPulling="2026-02-24 10:36:47.427713603 +0000 UTC m=+2511.884236156" observedRunningTime="2026-02-24 10:36:48.049570249 +0000 UTC m=+2512.506092802" watchObservedRunningTime="2026-02-24 10:36:48.051412096 +0000 UTC m=+2512.507934669" Feb 24 10:36:49 crc kubenswrapper[4755]: I0224 10:36:49.959328 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59354: no serving certificate available for the kubelet" Feb 24 10:36:51 crc kubenswrapper[4755]: I0224 10:36:51.023446 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59370: no serving certificate available for the kubelet" Feb 24 10:36:51 crc kubenswrapper[4755]: I0224 10:36:51.316882 4755 scope.go:117] 
"RemoveContainer" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed" Feb 24 10:36:51 crc kubenswrapper[4755]: E0224 10:36:51.317420 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:36:53 crc kubenswrapper[4755]: I0224 10:36:53.087536 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59372: no serving certificate available for the kubelet" Feb 24 10:36:53 crc kubenswrapper[4755]: I0224 10:36:53.870264 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ndxqv" Feb 24 10:36:53 crc kubenswrapper[4755]: I0224 10:36:53.870361 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ndxqv" Feb 24 10:36:53 crc kubenswrapper[4755]: I0224 10:36:53.948931 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ndxqv" Feb 24 10:36:54 crc kubenswrapper[4755]: I0224 10:36:54.076199 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34512: no serving certificate available for the kubelet" Feb 24 10:36:54 crc kubenswrapper[4755]: I0224 10:36:54.129590 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ndxqv" Feb 24 10:36:54 crc kubenswrapper[4755]: I0224 10:36:54.197545 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ndxqv"] Feb 24 10:36:56 crc kubenswrapper[4755]: I0224 10:36:56.098552 4755 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-marketplace/redhat-marketplace-ndxqv" podUID="5b27637e-62d5-4bac-a200-959e6c15c428" containerName="registry-server" containerID="cri-o://062ed46507d3c0466f5d588085a107f6e044bdb18cfdf01e08b7f1c73aa1907a" gracePeriod=2 Feb 24 10:36:56 crc kubenswrapper[4755]: I0224 10:36:56.147729 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34520: no serving certificate available for the kubelet" Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.093848 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ndxqv" Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.124434 4755 generic.go:334] "Generic (PLEG): container finished" podID="5b27637e-62d5-4bac-a200-959e6c15c428" containerID="062ed46507d3c0466f5d588085a107f6e044bdb18cfdf01e08b7f1c73aa1907a" exitCode=0 Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.124478 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndxqv" event={"ID":"5b27637e-62d5-4bac-a200-959e6c15c428","Type":"ContainerDied","Data":"062ed46507d3c0466f5d588085a107f6e044bdb18cfdf01e08b7f1c73aa1907a"} Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.124521 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndxqv" event={"ID":"5b27637e-62d5-4bac-a200-959e6c15c428","Type":"ContainerDied","Data":"d704dd957fc060265031ed8efbb354c0ccdd3ee09d3486a0e6d1f2e8ef71c09e"} Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.124538 4755 scope.go:117] "RemoveContainer" containerID="062ed46507d3c0466f5d588085a107f6e044bdb18cfdf01e08b7f1c73aa1907a" Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.124716 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ndxqv" Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.138213 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34530: no serving certificate available for the kubelet" Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.158242 4755 scope.go:117] "RemoveContainer" containerID="6de52b454946acf747817a43ee63c6838b337fab3b0d0f0d5fad8cf45b498981" Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.178061 4755 scope.go:117] "RemoveContainer" containerID="6d2441ec08b61e3eb2553357fabcc30c782f57e4d945142450166bc452bfa50d" Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.213054 4755 scope.go:117] "RemoveContainer" containerID="062ed46507d3c0466f5d588085a107f6e044bdb18cfdf01e08b7f1c73aa1907a" Feb 24 10:36:57 crc kubenswrapper[4755]: E0224 10:36:57.213428 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"062ed46507d3c0466f5d588085a107f6e044bdb18cfdf01e08b7f1c73aa1907a\": container with ID starting with 062ed46507d3c0466f5d588085a107f6e044bdb18cfdf01e08b7f1c73aa1907a not found: ID does not exist" containerID="062ed46507d3c0466f5d588085a107f6e044bdb18cfdf01e08b7f1c73aa1907a" Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.213458 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"062ed46507d3c0466f5d588085a107f6e044bdb18cfdf01e08b7f1c73aa1907a"} err="failed to get container status \"062ed46507d3c0466f5d588085a107f6e044bdb18cfdf01e08b7f1c73aa1907a\": rpc error: code = NotFound desc = could not find container \"062ed46507d3c0466f5d588085a107f6e044bdb18cfdf01e08b7f1c73aa1907a\": container with ID starting with 062ed46507d3c0466f5d588085a107f6e044bdb18cfdf01e08b7f1c73aa1907a not found: ID does not exist" Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.213479 4755 scope.go:117] "RemoveContainer" 
containerID="6de52b454946acf747817a43ee63c6838b337fab3b0d0f0d5fad8cf45b498981" Feb 24 10:36:57 crc kubenswrapper[4755]: E0224 10:36:57.213755 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6de52b454946acf747817a43ee63c6838b337fab3b0d0f0d5fad8cf45b498981\": container with ID starting with 6de52b454946acf747817a43ee63c6838b337fab3b0d0f0d5fad8cf45b498981 not found: ID does not exist" containerID="6de52b454946acf747817a43ee63c6838b337fab3b0d0f0d5fad8cf45b498981" Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.213803 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6de52b454946acf747817a43ee63c6838b337fab3b0d0f0d5fad8cf45b498981"} err="failed to get container status \"6de52b454946acf747817a43ee63c6838b337fab3b0d0f0d5fad8cf45b498981\": rpc error: code = NotFound desc = could not find container \"6de52b454946acf747817a43ee63c6838b337fab3b0d0f0d5fad8cf45b498981\": container with ID starting with 6de52b454946acf747817a43ee63c6838b337fab3b0d0f0d5fad8cf45b498981 not found: ID does not exist" Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.213840 4755 scope.go:117] "RemoveContainer" containerID="6d2441ec08b61e3eb2553357fabcc30c782f57e4d945142450166bc452bfa50d" Feb 24 10:36:57 crc kubenswrapper[4755]: E0224 10:36:57.214108 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d2441ec08b61e3eb2553357fabcc30c782f57e4d945142450166bc452bfa50d\": container with ID starting with 6d2441ec08b61e3eb2553357fabcc30c782f57e4d945142450166bc452bfa50d not found: ID does not exist" containerID="6d2441ec08b61e3eb2553357fabcc30c782f57e4d945142450166bc452bfa50d" Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.214134 4755 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6d2441ec08b61e3eb2553357fabcc30c782f57e4d945142450166bc452bfa50d"} err="failed to get container status \"6d2441ec08b61e3eb2553357fabcc30c782f57e4d945142450166bc452bfa50d\": rpc error: code = NotFound desc = could not find container \"6d2441ec08b61e3eb2553357fabcc30c782f57e4d945142450166bc452bfa50d\": container with ID starting with 6d2441ec08b61e3eb2553357fabcc30c782f57e4d945142450166bc452bfa50d not found: ID does not exist" Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.248885 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b27637e-62d5-4bac-a200-959e6c15c428-catalog-content\") pod \"5b27637e-62d5-4bac-a200-959e6c15c428\" (UID: \"5b27637e-62d5-4bac-a200-959e6c15c428\") " Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.248994 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b27637e-62d5-4bac-a200-959e6c15c428-utilities\") pod \"5b27637e-62d5-4bac-a200-959e6c15c428\" (UID: \"5b27637e-62d5-4bac-a200-959e6c15c428\") " Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.249066 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62fqr\" (UniqueName: \"kubernetes.io/projected/5b27637e-62d5-4bac-a200-959e6c15c428-kube-api-access-62fqr\") pod \"5b27637e-62d5-4bac-a200-959e6c15c428\" (UID: \"5b27637e-62d5-4bac-a200-959e6c15c428\") " Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.250220 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b27637e-62d5-4bac-a200-959e6c15c428-utilities" (OuterVolumeSpecName: "utilities") pod "5b27637e-62d5-4bac-a200-959e6c15c428" (UID: "5b27637e-62d5-4bac-a200-959e6c15c428"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.263687 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b27637e-62d5-4bac-a200-959e6c15c428-kube-api-access-62fqr" (OuterVolumeSpecName: "kube-api-access-62fqr") pod "5b27637e-62d5-4bac-a200-959e6c15c428" (UID: "5b27637e-62d5-4bac-a200-959e6c15c428"). InnerVolumeSpecName "kube-api-access-62fqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.314407 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b27637e-62d5-4bac-a200-959e6c15c428-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b27637e-62d5-4bac-a200-959e6c15c428" (UID: "5b27637e-62d5-4bac-a200-959e6c15c428"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.351729 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b27637e-62d5-4bac-a200-959e6c15c428-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.351791 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b27637e-62d5-4bac-a200-959e6c15c428-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.351809 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62fqr\" (UniqueName: \"kubernetes.io/projected/5b27637e-62d5-4bac-a200-959e6c15c428-kube-api-access-62fqr\") on node \"crc\" DevicePath \"\"" Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 10:36:57.462679 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ndxqv"] Feb 24 10:36:57 crc kubenswrapper[4755]: I0224 
10:36:57.470798 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ndxqv"] Feb 24 10:36:58 crc kubenswrapper[4755]: I0224 10:36:58.339340 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b27637e-62d5-4bac-a200-959e6c15c428" path="/var/lib/kubelet/pods/5b27637e-62d5-4bac-a200-959e6c15c428/volumes" Feb 24 10:36:59 crc kubenswrapper[4755]: I0224 10:36:59.209154 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34540: no serving certificate available for the kubelet" Feb 24 10:37:00 crc kubenswrapper[4755]: I0224 10:37:00.195454 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34556: no serving certificate available for the kubelet" Feb 24 10:37:02 crc kubenswrapper[4755]: I0224 10:37:02.270711 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34564: no serving certificate available for the kubelet" Feb 24 10:37:03 crc kubenswrapper[4755]: I0224 10:37:03.244895 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34568: no serving certificate available for the kubelet" Feb 24 10:37:03 crc kubenswrapper[4755]: I0224 10:37:03.316902 4755 scope.go:117] "RemoveContainer" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed" Feb 24 10:37:03 crc kubenswrapper[4755]: E0224 10:37:03.317372 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:37:05 crc kubenswrapper[4755]: I0224 10:37:05.331268 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51168: no serving certificate available for the kubelet" Feb 24 10:37:06 crc kubenswrapper[4755]: I0224 
10:37:06.296524 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51174: no serving certificate available for the kubelet" Feb 24 10:37:08 crc kubenswrapper[4755]: I0224 10:37:08.387925 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51190: no serving certificate available for the kubelet" Feb 24 10:37:09 crc kubenswrapper[4755]: I0224 10:37:09.343859 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51196: no serving certificate available for the kubelet" Feb 24 10:37:10 crc kubenswrapper[4755]: I0224 10:37:10.143143 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xhsvj"] Feb 24 10:37:10 crc kubenswrapper[4755]: E0224 10:37:10.144189 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b27637e-62d5-4bac-a200-959e6c15c428" containerName="extract-utilities" Feb 24 10:37:10 crc kubenswrapper[4755]: I0224 10:37:10.144283 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b27637e-62d5-4bac-a200-959e6c15c428" containerName="extract-utilities" Feb 24 10:37:10 crc kubenswrapper[4755]: E0224 10:37:10.144352 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b27637e-62d5-4bac-a200-959e6c15c428" containerName="registry-server" Feb 24 10:37:10 crc kubenswrapper[4755]: I0224 10:37:10.144407 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b27637e-62d5-4bac-a200-959e6c15c428" containerName="registry-server" Feb 24 10:37:10 crc kubenswrapper[4755]: E0224 10:37:10.144471 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b27637e-62d5-4bac-a200-959e6c15c428" containerName="extract-content" Feb 24 10:37:10 crc kubenswrapper[4755]: I0224 10:37:10.144518 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b27637e-62d5-4bac-a200-959e6c15c428" containerName="extract-content" Feb 24 10:37:10 crc kubenswrapper[4755]: I0224 10:37:10.144711 4755 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5b27637e-62d5-4bac-a200-959e6c15c428" containerName="registry-server" Feb 24 10:37:10 crc kubenswrapper[4755]: I0224 10:37:10.145871 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xhsvj" Feb 24 10:37:10 crc kubenswrapper[4755]: I0224 10:37:10.150477 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xhsvj"] Feb 24 10:37:10 crc kubenswrapper[4755]: I0224 10:37:10.315688 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0150d1b-bae0-4fb1-bed4-aaf87ebec628-utilities\") pod \"certified-operators-xhsvj\" (UID: \"c0150d1b-bae0-4fb1-bed4-aaf87ebec628\") " pod="openshift-marketplace/certified-operators-xhsvj" Feb 24 10:37:10 crc kubenswrapper[4755]: I0224 10:37:10.315819 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0150d1b-bae0-4fb1-bed4-aaf87ebec628-catalog-content\") pod \"certified-operators-xhsvj\" (UID: \"c0150d1b-bae0-4fb1-bed4-aaf87ebec628\") " pod="openshift-marketplace/certified-operators-xhsvj" Feb 24 10:37:10 crc kubenswrapper[4755]: I0224 10:37:10.316037 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jjp7\" (UniqueName: \"kubernetes.io/projected/c0150d1b-bae0-4fb1-bed4-aaf87ebec628-kube-api-access-7jjp7\") pod \"certified-operators-xhsvj\" (UID: \"c0150d1b-bae0-4fb1-bed4-aaf87ebec628\") " pod="openshift-marketplace/certified-operators-xhsvj" Feb 24 10:37:10 crc kubenswrapper[4755]: I0224 10:37:10.417946 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jjp7\" (UniqueName: \"kubernetes.io/projected/c0150d1b-bae0-4fb1-bed4-aaf87ebec628-kube-api-access-7jjp7\") pod \"certified-operators-xhsvj\" 
(UID: \"c0150d1b-bae0-4fb1-bed4-aaf87ebec628\") " pod="openshift-marketplace/certified-operators-xhsvj" Feb 24 10:37:10 crc kubenswrapper[4755]: I0224 10:37:10.418014 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0150d1b-bae0-4fb1-bed4-aaf87ebec628-utilities\") pod \"certified-operators-xhsvj\" (UID: \"c0150d1b-bae0-4fb1-bed4-aaf87ebec628\") " pod="openshift-marketplace/certified-operators-xhsvj" Feb 24 10:37:10 crc kubenswrapper[4755]: I0224 10:37:10.418229 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0150d1b-bae0-4fb1-bed4-aaf87ebec628-catalog-content\") pod \"certified-operators-xhsvj\" (UID: \"c0150d1b-bae0-4fb1-bed4-aaf87ebec628\") " pod="openshift-marketplace/certified-operators-xhsvj" Feb 24 10:37:10 crc kubenswrapper[4755]: I0224 10:37:10.418540 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0150d1b-bae0-4fb1-bed4-aaf87ebec628-utilities\") pod \"certified-operators-xhsvj\" (UID: \"c0150d1b-bae0-4fb1-bed4-aaf87ebec628\") " pod="openshift-marketplace/certified-operators-xhsvj" Feb 24 10:37:10 crc kubenswrapper[4755]: I0224 10:37:10.418682 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0150d1b-bae0-4fb1-bed4-aaf87ebec628-catalog-content\") pod \"certified-operators-xhsvj\" (UID: \"c0150d1b-bae0-4fb1-bed4-aaf87ebec628\") " pod="openshift-marketplace/certified-operators-xhsvj" Feb 24 10:37:10 crc kubenswrapper[4755]: I0224 10:37:10.454057 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jjp7\" (UniqueName: \"kubernetes.io/projected/c0150d1b-bae0-4fb1-bed4-aaf87ebec628-kube-api-access-7jjp7\") pod \"certified-operators-xhsvj\" (UID: \"c0150d1b-bae0-4fb1-bed4-aaf87ebec628\") " 
pod="openshift-marketplace/certified-operators-xhsvj" Feb 24 10:37:10 crc kubenswrapper[4755]: I0224 10:37:10.467522 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xhsvj" Feb 24 10:37:10 crc kubenswrapper[4755]: I0224 10:37:10.929941 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xhsvj"] Feb 24 10:37:11 crc kubenswrapper[4755]: I0224 10:37:11.278817 4755 generic.go:334] "Generic (PLEG): container finished" podID="c0150d1b-bae0-4fb1-bed4-aaf87ebec628" containerID="a684e322c2dca9ce08c66083017f68b2e49571fe30498fd9ccebf4bf601f4661" exitCode=0 Feb 24 10:37:11 crc kubenswrapper[4755]: I0224 10:37:11.278866 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhsvj" event={"ID":"c0150d1b-bae0-4fb1-bed4-aaf87ebec628","Type":"ContainerDied","Data":"a684e322c2dca9ce08c66083017f68b2e49571fe30498fd9ccebf4bf601f4661"} Feb 24 10:37:11 crc kubenswrapper[4755]: I0224 10:37:11.278897 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhsvj" event={"ID":"c0150d1b-bae0-4fb1-bed4-aaf87ebec628","Type":"ContainerStarted","Data":"8cee6c87cdf9fefd7c9aa3aacd6b87dd1878f2e87373d05c84d9cf3db322b821"} Feb 24 10:37:11 crc kubenswrapper[4755]: I0224 10:37:11.448096 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51210: no serving certificate available for the kubelet" Feb 24 10:37:12 crc kubenswrapper[4755]: I0224 10:37:12.291690 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhsvj" event={"ID":"c0150d1b-bae0-4fb1-bed4-aaf87ebec628","Type":"ContainerStarted","Data":"baeebe43352c960875d1b12c14a901ca6b62bd2d9ca78d92f2607a175a5b0afa"} Feb 24 10:37:12 crc kubenswrapper[4755]: I0224 10:37:12.401686 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51224: no serving certificate available for the 
kubelet" Feb 24 10:37:13 crc kubenswrapper[4755]: I0224 10:37:13.303976 4755 generic.go:334] "Generic (PLEG): container finished" podID="c0150d1b-bae0-4fb1-bed4-aaf87ebec628" containerID="baeebe43352c960875d1b12c14a901ca6b62bd2d9ca78d92f2607a175a5b0afa" exitCode=0 Feb 24 10:37:13 crc kubenswrapper[4755]: I0224 10:37:13.304038 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhsvj" event={"ID":"c0150d1b-bae0-4fb1-bed4-aaf87ebec628","Type":"ContainerDied","Data":"baeebe43352c960875d1b12c14a901ca6b62bd2d9ca78d92f2607a175a5b0afa"} Feb 24 10:37:14 crc kubenswrapper[4755]: I0224 10:37:14.333588 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhsvj" event={"ID":"c0150d1b-bae0-4fb1-bed4-aaf87ebec628","Type":"ContainerStarted","Data":"8bcfa0732e5b1c81e6fd87d021c4ccb6eb86ea35638b79db0c8f950105f2acec"} Feb 24 10:37:14 crc kubenswrapper[4755]: I0224 10:37:14.343580 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xhsvj" podStartSLOduration=1.837094657 podStartE2EDuration="4.343560584s" podCreationTimestamp="2026-02-24 10:37:10 +0000 UTC" firstStartedPulling="2026-02-24 10:37:11.2827485 +0000 UTC m=+2535.739271053" lastFinishedPulling="2026-02-24 10:37:13.789214387 +0000 UTC m=+2538.245736980" observedRunningTime="2026-02-24 10:37:14.339845379 +0000 UTC m=+2538.796367982" watchObservedRunningTime="2026-02-24 10:37:14.343560584 +0000 UTC m=+2538.800083157" Feb 24 10:37:14 crc kubenswrapper[4755]: I0224 10:37:14.492366 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54604: no serving certificate available for the kubelet" Feb 24 10:37:15 crc kubenswrapper[4755]: I0224 10:37:15.437158 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54612: no serving certificate available for the kubelet" Feb 24 10:37:16 crc kubenswrapper[4755]: I0224 10:37:16.323580 4755 scope.go:117] 
"RemoveContainer" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed" Feb 24 10:37:16 crc kubenswrapper[4755]: E0224 10:37:16.324005 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:37:17 crc kubenswrapper[4755]: I0224 10:37:17.538014 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54626: no serving certificate available for the kubelet" Feb 24 10:37:18 crc kubenswrapper[4755]: I0224 10:37:18.494960 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54634: no serving certificate available for the kubelet" Feb 24 10:37:20 crc kubenswrapper[4755]: I0224 10:37:20.468704 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xhsvj" Feb 24 10:37:20 crc kubenswrapper[4755]: I0224 10:37:20.470243 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xhsvj" Feb 24 10:37:20 crc kubenswrapper[4755]: I0224 10:37:20.538256 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xhsvj" Feb 24 10:37:20 crc kubenswrapper[4755]: I0224 10:37:20.582647 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54638: no serving certificate available for the kubelet" Feb 24 10:37:21 crc kubenswrapper[4755]: I0224 10:37:21.459867 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xhsvj" Feb 24 10:37:21 crc kubenswrapper[4755]: I0224 10:37:21.541658 4755 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/certified-operators-xhsvj"] Feb 24 10:37:21 crc kubenswrapper[4755]: I0224 10:37:21.607945 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54646: no serving certificate available for the kubelet" Feb 24 10:37:23 crc kubenswrapper[4755]: I0224 10:37:23.405699 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xhsvj" podUID="c0150d1b-bae0-4fb1-bed4-aaf87ebec628" containerName="registry-server" containerID="cri-o://8bcfa0732e5b1c81e6fd87d021c4ccb6eb86ea35638b79db0c8f950105f2acec" gracePeriod=2 Feb 24 10:37:23 crc kubenswrapper[4755]: I0224 10:37:23.641860 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54658: no serving certificate available for the kubelet" Feb 24 10:37:23 crc kubenswrapper[4755]: I0224 10:37:23.969127 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xhsvj" Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.076521 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjp7\" (UniqueName: \"kubernetes.io/projected/c0150d1b-bae0-4fb1-bed4-aaf87ebec628-kube-api-access-7jjp7\") pod \"c0150d1b-bae0-4fb1-bed4-aaf87ebec628\" (UID: \"c0150d1b-bae0-4fb1-bed4-aaf87ebec628\") " Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.076636 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0150d1b-bae0-4fb1-bed4-aaf87ebec628-catalog-content\") pod \"c0150d1b-bae0-4fb1-bed4-aaf87ebec628\" (UID: \"c0150d1b-bae0-4fb1-bed4-aaf87ebec628\") " Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.076679 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0150d1b-bae0-4fb1-bed4-aaf87ebec628-utilities\") pod 
\"c0150d1b-bae0-4fb1-bed4-aaf87ebec628\" (UID: \"c0150d1b-bae0-4fb1-bed4-aaf87ebec628\") " Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.078459 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0150d1b-bae0-4fb1-bed4-aaf87ebec628-utilities" (OuterVolumeSpecName: "utilities") pod "c0150d1b-bae0-4fb1-bed4-aaf87ebec628" (UID: "c0150d1b-bae0-4fb1-bed4-aaf87ebec628"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.083675 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0150d1b-bae0-4fb1-bed4-aaf87ebec628-kube-api-access-7jjp7" (OuterVolumeSpecName: "kube-api-access-7jjp7") pod "c0150d1b-bae0-4fb1-bed4-aaf87ebec628" (UID: "c0150d1b-bae0-4fb1-bed4-aaf87ebec628"). InnerVolumeSpecName "kube-api-access-7jjp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.134340 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0150d1b-bae0-4fb1-bed4-aaf87ebec628-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0150d1b-bae0-4fb1-bed4-aaf87ebec628" (UID: "c0150d1b-bae0-4fb1-bed4-aaf87ebec628"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.178161 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jjp7\" (UniqueName: \"kubernetes.io/projected/c0150d1b-bae0-4fb1-bed4-aaf87ebec628-kube-api-access-7jjp7\") on node \"crc\" DevicePath \"\"" Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.178194 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0150d1b-bae0-4fb1-bed4-aaf87ebec628-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.178207 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0150d1b-bae0-4fb1-bed4-aaf87ebec628-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.425759 4755 generic.go:334] "Generic (PLEG): container finished" podID="c0150d1b-bae0-4fb1-bed4-aaf87ebec628" containerID="8bcfa0732e5b1c81e6fd87d021c4ccb6eb86ea35638b79db0c8f950105f2acec" exitCode=0 Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.425834 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xhsvj" Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.425837 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhsvj" event={"ID":"c0150d1b-bae0-4fb1-bed4-aaf87ebec628","Type":"ContainerDied","Data":"8bcfa0732e5b1c81e6fd87d021c4ccb6eb86ea35638b79db0c8f950105f2acec"} Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.426014 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhsvj" event={"ID":"c0150d1b-bae0-4fb1-bed4-aaf87ebec628","Type":"ContainerDied","Data":"8cee6c87cdf9fefd7c9aa3aacd6b87dd1878f2e87373d05c84d9cf3db322b821"} Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.426040 4755 scope.go:117] "RemoveContainer" containerID="8bcfa0732e5b1c81e6fd87d021c4ccb6eb86ea35638b79db0c8f950105f2acec" Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.467464 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xhsvj"] Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.468286 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xhsvj"] Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.471859 4755 scope.go:117] "RemoveContainer" containerID="baeebe43352c960875d1b12c14a901ca6b62bd2d9ca78d92f2607a175a5b0afa" Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.497043 4755 scope.go:117] "RemoveContainer" containerID="a684e322c2dca9ce08c66083017f68b2e49571fe30498fd9ccebf4bf601f4661" Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.561156 4755 scope.go:117] "RemoveContainer" containerID="8bcfa0732e5b1c81e6fd87d021c4ccb6eb86ea35638b79db0c8f950105f2acec" Feb 24 10:37:24 crc kubenswrapper[4755]: E0224 10:37:24.561770 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8bcfa0732e5b1c81e6fd87d021c4ccb6eb86ea35638b79db0c8f950105f2acec\": container with ID starting with 8bcfa0732e5b1c81e6fd87d021c4ccb6eb86ea35638b79db0c8f950105f2acec not found: ID does not exist" containerID="8bcfa0732e5b1c81e6fd87d021c4ccb6eb86ea35638b79db0c8f950105f2acec" Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.561808 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bcfa0732e5b1c81e6fd87d021c4ccb6eb86ea35638b79db0c8f950105f2acec"} err="failed to get container status \"8bcfa0732e5b1c81e6fd87d021c4ccb6eb86ea35638b79db0c8f950105f2acec\": rpc error: code = NotFound desc = could not find container \"8bcfa0732e5b1c81e6fd87d021c4ccb6eb86ea35638b79db0c8f950105f2acec\": container with ID starting with 8bcfa0732e5b1c81e6fd87d021c4ccb6eb86ea35638b79db0c8f950105f2acec not found: ID does not exist" Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.561833 4755 scope.go:117] "RemoveContainer" containerID="baeebe43352c960875d1b12c14a901ca6b62bd2d9ca78d92f2607a175a5b0afa" Feb 24 10:37:24 crc kubenswrapper[4755]: E0224 10:37:24.562159 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baeebe43352c960875d1b12c14a901ca6b62bd2d9ca78d92f2607a175a5b0afa\": container with ID starting with baeebe43352c960875d1b12c14a901ca6b62bd2d9ca78d92f2607a175a5b0afa not found: ID does not exist" containerID="baeebe43352c960875d1b12c14a901ca6b62bd2d9ca78d92f2607a175a5b0afa" Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.562199 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baeebe43352c960875d1b12c14a901ca6b62bd2d9ca78d92f2607a175a5b0afa"} err="failed to get container status \"baeebe43352c960875d1b12c14a901ca6b62bd2d9ca78d92f2607a175a5b0afa\": rpc error: code = NotFound desc = could not find container \"baeebe43352c960875d1b12c14a901ca6b62bd2d9ca78d92f2607a175a5b0afa\": container with ID 
starting with baeebe43352c960875d1b12c14a901ca6b62bd2d9ca78d92f2607a175a5b0afa not found: ID does not exist" Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.562226 4755 scope.go:117] "RemoveContainer" containerID="a684e322c2dca9ce08c66083017f68b2e49571fe30498fd9ccebf4bf601f4661" Feb 24 10:37:24 crc kubenswrapper[4755]: E0224 10:37:24.562507 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a684e322c2dca9ce08c66083017f68b2e49571fe30498fd9ccebf4bf601f4661\": container with ID starting with a684e322c2dca9ce08c66083017f68b2e49571fe30498fd9ccebf4bf601f4661 not found: ID does not exist" containerID="a684e322c2dca9ce08c66083017f68b2e49571fe30498fd9ccebf4bf601f4661" Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.562533 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a684e322c2dca9ce08c66083017f68b2e49571fe30498fd9ccebf4bf601f4661"} err="failed to get container status \"a684e322c2dca9ce08c66083017f68b2e49571fe30498fd9ccebf4bf601f4661\": rpc error: code = NotFound desc = could not find container \"a684e322c2dca9ce08c66083017f68b2e49571fe30498fd9ccebf4bf601f4661\": container with ID starting with a684e322c2dca9ce08c66083017f68b2e49571fe30498fd9ccebf4bf601f4661 not found: ID does not exist" Feb 24 10:37:24 crc kubenswrapper[4755]: I0224 10:37:24.643200 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41856: no serving certificate available for the kubelet" Feb 24 10:37:26 crc kubenswrapper[4755]: I0224 10:37:26.335036 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0150d1b-bae0-4fb1-bed4-aaf87ebec628" path="/var/lib/kubelet/pods/c0150d1b-bae0-4fb1-bed4-aaf87ebec628/volumes" Feb 24 10:37:26 crc kubenswrapper[4755]: I0224 10:37:26.699844 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41866: no serving certificate available for the kubelet" Feb 24 10:37:27 crc kubenswrapper[4755]: 
I0224 10:37:27.687318 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41880: no serving certificate available for the kubelet" Feb 24 10:37:28 crc kubenswrapper[4755]: I0224 10:37:28.317834 4755 scope.go:117] "RemoveContainer" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed" Feb 24 10:37:28 crc kubenswrapper[4755]: E0224 10:37:28.318298 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:37:29 crc kubenswrapper[4755]: I0224 10:37:29.756237 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41896: no serving certificate available for the kubelet" Feb 24 10:37:30 crc kubenswrapper[4755]: I0224 10:37:30.739120 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41900: no serving certificate available for the kubelet" Feb 24 10:37:32 crc kubenswrapper[4755]: I0224 10:37:32.820796 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41902: no serving certificate available for the kubelet" Feb 24 10:37:33 crc kubenswrapper[4755]: I0224 10:37:33.793816 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37800: no serving certificate available for the kubelet" Feb 24 10:37:35 crc kubenswrapper[4755]: I0224 10:37:35.879649 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37802: no serving certificate available for the kubelet" Feb 24 10:37:36 crc kubenswrapper[4755]: I0224 10:37:36.852720 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37816: no serving certificate available for the kubelet" Feb 24 10:37:38 crc kubenswrapper[4755]: I0224 10:37:38.919475 4755 ???:1] "http: TLS handshake error from 
192.168.126.11:37818: no serving certificate available for the kubelet" Feb 24 10:37:39 crc kubenswrapper[4755]: I0224 10:37:39.893115 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37834: no serving certificate available for the kubelet" Feb 24 10:37:40 crc kubenswrapper[4755]: I0224 10:37:40.316594 4755 scope.go:117] "RemoveContainer" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed" Feb 24 10:37:40 crc kubenswrapper[4755]: E0224 10:37:40.317242 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:37:41 crc kubenswrapper[4755]: I0224 10:37:41.986152 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37842: no serving certificate available for the kubelet" Feb 24 10:37:42 crc kubenswrapper[4755]: I0224 10:37:42.943476 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37844: no serving certificate available for the kubelet" Feb 24 10:37:45 crc kubenswrapper[4755]: I0224 10:37:45.043880 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37344: no serving certificate available for the kubelet" Feb 24 10:37:46 crc kubenswrapper[4755]: I0224 10:37:46.003191 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37346: no serving certificate available for the kubelet" Feb 24 10:37:48 crc kubenswrapper[4755]: I0224 10:37:48.091361 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37348: no serving certificate available for the kubelet" Feb 24 10:37:49 crc kubenswrapper[4755]: I0224 10:37:49.047178 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37354: no serving certificate available for the kubelet" 
Feb 24 10:37:51 crc kubenswrapper[4755]: I0224 10:37:51.143345 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37358: no serving certificate available for the kubelet" Feb 24 10:37:52 crc kubenswrapper[4755]: I0224 10:37:52.113865 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37368: no serving certificate available for the kubelet" Feb 24 10:37:54 crc kubenswrapper[4755]: I0224 10:37:54.202801 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48894: no serving certificate available for the kubelet" Feb 24 10:37:54 crc kubenswrapper[4755]: I0224 10:37:54.317253 4755 scope.go:117] "RemoveContainer" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed" Feb 24 10:37:54 crc kubenswrapper[4755]: E0224 10:37:54.317701 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:37:55 crc kubenswrapper[4755]: I0224 10:37:55.171944 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48900: no serving certificate available for the kubelet" Feb 24 10:37:57 crc kubenswrapper[4755]: I0224 10:37:57.260473 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48910: no serving certificate available for the kubelet" Feb 24 10:37:58 crc kubenswrapper[4755]: I0224 10:37:58.221724 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48918: no serving certificate available for the kubelet" Feb 24 10:38:00 crc kubenswrapper[4755]: I0224 10:38:00.317798 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48932: no serving certificate available for the kubelet" Feb 24 10:38:01 crc kubenswrapper[4755]: I0224 10:38:01.273678 4755 ???:1] 
"http: TLS handshake error from 192.168.126.11:48944: no serving certificate available for the kubelet" Feb 24 10:38:03 crc kubenswrapper[4755]: I0224 10:38:03.378904 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48946: no serving certificate available for the kubelet" Feb 24 10:38:04 crc kubenswrapper[4755]: I0224 10:38:04.413998 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39200: no serving certificate available for the kubelet" Feb 24 10:38:06 crc kubenswrapper[4755]: I0224 10:38:06.436806 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39212: no serving certificate available for the kubelet" Feb 24 10:38:07 crc kubenswrapper[4755]: I0224 10:38:07.454506 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39220: no serving certificate available for the kubelet" Feb 24 10:38:09 crc kubenswrapper[4755]: I0224 10:38:09.316998 4755 scope.go:117] "RemoveContainer" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed" Feb 24 10:38:09 crc kubenswrapper[4755]: E0224 10:38:09.318573 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:38:09 crc kubenswrapper[4755]: I0224 10:38:09.480473 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39222: no serving certificate available for the kubelet" Feb 24 10:38:10 crc kubenswrapper[4755]: I0224 10:38:10.511390 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39230: no serving certificate available for the kubelet" Feb 24 10:38:12 crc kubenswrapper[4755]: I0224 10:38:12.545214 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39240: no serving 
certificate available for the kubelet" Feb 24 10:38:13 crc kubenswrapper[4755]: I0224 10:38:13.579393 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39242: no serving certificate available for the kubelet" Feb 24 10:38:15 crc kubenswrapper[4755]: I0224 10:38:15.600659 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41178: no serving certificate available for the kubelet" Feb 24 10:38:16 crc kubenswrapper[4755]: I0224 10:38:16.617127 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41192: no serving certificate available for the kubelet" Feb 24 10:38:18 crc kubenswrapper[4755]: I0224 10:38:18.654239 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41200: no serving certificate available for the kubelet" Feb 24 10:38:19 crc kubenswrapper[4755]: I0224 10:38:19.715051 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41204: no serving certificate available for the kubelet" Feb 24 10:38:21 crc kubenswrapper[4755]: I0224 10:38:21.704520 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41206: no serving certificate available for the kubelet" Feb 24 10:38:22 crc kubenswrapper[4755]: I0224 10:38:22.316911 4755 scope.go:117] "RemoveContainer" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed" Feb 24 10:38:22 crc kubenswrapper[4755]: E0224 10:38:22.317716 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:38:22 crc kubenswrapper[4755]: I0224 10:38:22.778130 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41218: no serving certificate available for the kubelet" Feb 24 10:38:24 crc 
kubenswrapper[4755]: I0224 10:38:24.771983 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44538: no serving certificate available for the kubelet" Feb 24 10:38:25 crc kubenswrapper[4755]: I0224 10:38:25.829172 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44544: no serving certificate available for the kubelet" Feb 24 10:38:27 crc kubenswrapper[4755]: I0224 10:38:27.823424 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44552: no serving certificate available for the kubelet" Feb 24 10:38:28 crc kubenswrapper[4755]: I0224 10:38:28.887025 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44558: no serving certificate available for the kubelet" Feb 24 10:38:30 crc kubenswrapper[4755]: I0224 10:38:30.880444 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44570: no serving certificate available for the kubelet" Feb 24 10:38:31 crc kubenswrapper[4755]: I0224 10:38:31.934656 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44572: no serving certificate available for the kubelet" Feb 24 10:38:33 crc kubenswrapper[4755]: I0224 10:38:33.935768 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49606: no serving certificate available for the kubelet" Feb 24 10:38:34 crc kubenswrapper[4755]: I0224 10:38:34.985559 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49618: no serving certificate available for the kubelet" Feb 24 10:38:37 crc kubenswrapper[4755]: I0224 10:38:37.028459 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49634: no serving certificate available for the kubelet" Feb 24 10:38:37 crc kubenswrapper[4755]: I0224 10:38:37.317492 4755 scope.go:117] "RemoveContainer" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed" Feb 24 10:38:37 crc kubenswrapper[4755]: E0224 10:38:37.318018 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:38:38 crc kubenswrapper[4755]: I0224 10:38:38.027772 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49648: no serving certificate available for the kubelet" Feb 24 10:38:40 crc kubenswrapper[4755]: I0224 10:38:40.070209 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49650: no serving certificate available for the kubelet" Feb 24 10:38:41 crc kubenswrapper[4755]: I0224 10:38:41.079333 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49664: no serving certificate available for the kubelet" Feb 24 10:38:43 crc kubenswrapper[4755]: I0224 10:38:43.133528 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49666: no serving certificate available for the kubelet" Feb 24 10:38:44 crc kubenswrapper[4755]: I0224 10:38:44.151269 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36396: no serving certificate available for the kubelet" Feb 24 10:38:46 crc kubenswrapper[4755]: I0224 10:38:46.191132 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36402: no serving certificate available for the kubelet" Feb 24 10:38:47 crc kubenswrapper[4755]: I0224 10:38:47.205943 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36416: no serving certificate available for the kubelet" Feb 24 10:38:48 crc kubenswrapper[4755]: I0224 10:38:48.317350 4755 scope.go:117] "RemoveContainer" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed" Feb 24 10:38:48 crc kubenswrapper[4755]: E0224 10:38:48.318118 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:38:49 crc kubenswrapper[4755]: I0224 10:38:49.299419 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36422: no serving certificate available for the kubelet" Feb 24 10:38:49 crc kubenswrapper[4755]: I0224 10:38:49.422844 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" containerName="galera" probeResult="failure" output=< Feb 24 10:38:49 crc kubenswrapper[4755]: waiting for gcomm URI Feb 24 10:38:49 crc kubenswrapper[4755]: > Feb 24 10:38:49 crc kubenswrapper[4755]: I0224 10:38:49.424284 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 10:38:49 crc kubenswrapper[4755]: I0224 10:38:49.425596 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"800f98376a75908f371f36101dfbbd70164bb3acb8517901561bd0e5eddf4c78"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed startup probe, will be restarted" Feb 24 10:38:49 crc kubenswrapper[4755]: I0224 10:38:49.511373 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" containerName="galera" containerID="cri-o://800f98376a75908f371f36101dfbbd70164bb3acb8517901561bd0e5eddf4c78" gracePeriod=30 Feb 24 10:38:50 crc kubenswrapper[4755]: I0224 10:38:50.253502 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36424: no serving certificate available for the kubelet" Feb 24 10:38:50 crc kubenswrapper[4755]: I0224 10:38:50.314385 4755 generic.go:334] "Generic (PLEG): container finished" podID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" 
containerID="800f98376a75908f371f36101dfbbd70164bb3acb8517901561bd0e5eddf4c78" exitCode=143 Feb 24 10:38:50 crc kubenswrapper[4755]: I0224 10:38:50.314446 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe13802e-a28d-4e11-a315-c0ae66bf0e1c","Type":"ContainerDied","Data":"800f98376a75908f371f36101dfbbd70164bb3acb8517901561bd0e5eddf4c78"} Feb 24 10:38:50 crc kubenswrapper[4755]: I0224 10:38:50.314485 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe13802e-a28d-4e11-a315-c0ae66bf0e1c","Type":"ContainerStarted","Data":"63b43b5bff9a2589978e4d97ca45ea305c524e73be8cfeed915cc1ece0bd0323"} Feb 24 10:38:50 crc kubenswrapper[4755]: I0224 10:38:50.314515 4755 scope.go:117] "RemoveContainer" containerID="129999cf2c6f5d78b8ad9cb4b1c99cac169f2012398f47f0cbba1aba19702257" Feb 24 10:38:51 crc kubenswrapper[4755]: I0224 10:38:51.135234 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39" containerName="galera" probeResult="failure" output=< Feb 24 10:38:51 crc kubenswrapper[4755]: waiting for gcomm URI Feb 24 10:38:51 crc kubenswrapper[4755]: > Feb 24 10:38:51 crc kubenswrapper[4755]: I0224 10:38:51.135690 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 10:38:51 crc kubenswrapper[4755]: I0224 10:38:51.329091 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"249cff2e52b9673481d5d24fc8158108cdf8f7eb45d935cdf1ec11a460574df5"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed startup probe, will be restarted" Feb 24 10:38:51 crc kubenswrapper[4755]: I0224 10:38:51.399992 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" 
podUID="2f320527-691f-48e9-a243-f60bc805da39" containerName="galera" containerID="cri-o://249cff2e52b9673481d5d24fc8158108cdf8f7eb45d935cdf1ec11a460574df5" gracePeriod=30 Feb 24 10:38:52 crc kubenswrapper[4755]: I0224 10:38:52.341372 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36440: no serving certificate available for the kubelet" Feb 24 10:38:52 crc kubenswrapper[4755]: I0224 10:38:52.342921 4755 generic.go:334] "Generic (PLEG): container finished" podID="2f320527-691f-48e9-a243-f60bc805da39" containerID="249cff2e52b9673481d5d24fc8158108cdf8f7eb45d935cdf1ec11a460574df5" exitCode=143 Feb 24 10:38:52 crc kubenswrapper[4755]: I0224 10:38:52.342978 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2f320527-691f-48e9-a243-f60bc805da39","Type":"ContainerDied","Data":"249cff2e52b9673481d5d24fc8158108cdf8f7eb45d935cdf1ec11a460574df5"} Feb 24 10:38:52 crc kubenswrapper[4755]: I0224 10:38:52.343029 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2f320527-691f-48e9-a243-f60bc805da39","Type":"ContainerStarted","Data":"a84b3cec371eaf91d0e311ef88e8da2d9e2fb0557b9c34f1a91d87c697bed797"} Feb 24 10:38:52 crc kubenswrapper[4755]: I0224 10:38:52.343050 4755 scope.go:117] "RemoveContainer" containerID="a8d889c52bcf05c875239ad26fdbe4df1a47c88c54129a3d9d3a9cf0739230fe" Feb 24 10:38:53 crc kubenswrapper[4755]: I0224 10:38:53.309973 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36456: no serving certificate available for the kubelet" Feb 24 10:38:55 crc kubenswrapper[4755]: I0224 10:38:55.410217 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33838: no serving certificate available for the kubelet" Feb 24 10:38:56 crc kubenswrapper[4755]: I0224 10:38:56.363655 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33848: no serving certificate available for the kubelet" Feb 24 10:38:58 crc kubenswrapper[4755]: I0224 
10:38:58.483717 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33856: no serving certificate available for the kubelet" Feb 24 10:38:59 crc kubenswrapper[4755]: I0224 10:38:59.425353 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33862: no serving certificate available for the kubelet" Feb 24 10:38:59 crc kubenswrapper[4755]: I0224 10:38:59.887780 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 10:38:59 crc kubenswrapper[4755]: I0224 10:38:59.887852 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 24 10:39:01 crc kubenswrapper[4755]: I0224 10:39:01.289621 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 24 10:39:01 crc kubenswrapper[4755]: I0224 10:39:01.289754 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 10:39:01 crc kubenswrapper[4755]: I0224 10:39:01.317615 4755 scope.go:117] "RemoveContainer" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed" Feb 24 10:39:01 crc kubenswrapper[4755]: E0224 10:39:01.318215 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:39:01 crc kubenswrapper[4755]: I0224 10:39:01.540692 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33870: no serving certificate available for the kubelet" Feb 24 10:39:02 crc kubenswrapper[4755]: I0224 10:39:02.471821 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33878: no serving 
certificate available for the kubelet" Feb 24 10:39:03 crc kubenswrapper[4755]: I0224 10:39:03.233190 4755 ???:1] "http: TLS handshake error from 192.168.126.11:33882: no serving certificate available for the kubelet" Feb 24 10:39:04 crc kubenswrapper[4755]: I0224 10:39:04.603955 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49026: no serving certificate available for the kubelet" Feb 24 10:39:05 crc kubenswrapper[4755]: I0224 10:39:05.513522 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49032: no serving certificate available for the kubelet" Feb 24 10:39:07 crc kubenswrapper[4755]: I0224 10:39:07.651022 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49034: no serving certificate available for the kubelet" Feb 24 10:39:08 crc kubenswrapper[4755]: I0224 10:39:08.560491 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49050: no serving certificate available for the kubelet" Feb 24 10:39:10 crc kubenswrapper[4755]: I0224 10:39:10.717865 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49056: no serving certificate available for the kubelet" Feb 24 10:39:11 crc kubenswrapper[4755]: I0224 10:39:11.622151 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49066: no serving certificate available for the kubelet" Feb 24 10:39:12 crc kubenswrapper[4755]: I0224 10:39:12.316961 4755 scope.go:117] "RemoveContainer" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed" Feb 24 10:39:12 crc kubenswrapper[4755]: E0224 10:39:12.317257 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:39:13 crc 
kubenswrapper[4755]: I0224 10:39:13.778533 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43254: no serving certificate available for the kubelet" Feb 24 10:39:14 crc kubenswrapper[4755]: I0224 10:39:14.677009 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43266: no serving certificate available for the kubelet" Feb 24 10:39:16 crc kubenswrapper[4755]: I0224 10:39:16.839185 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43276: no serving certificate available for the kubelet" Feb 24 10:39:17 crc kubenswrapper[4755]: I0224 10:39:17.793568 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43292: no serving certificate available for the kubelet" Feb 24 10:39:19 crc kubenswrapper[4755]: I0224 10:39:19.892949 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43308: no serving certificate available for the kubelet" Feb 24 10:39:20 crc kubenswrapper[4755]: I0224 10:39:20.851356 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43314: no serving certificate available for the kubelet" Feb 24 10:39:22 crc kubenswrapper[4755]: I0224 10:39:22.931684 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43326: no serving certificate available for the kubelet" Feb 24 10:39:23 crc kubenswrapper[4755]: I0224 10:39:23.920548 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44714: no serving certificate available for the kubelet" Feb 24 10:39:26 crc kubenswrapper[4755]: I0224 10:39:26.005918 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44718: no serving certificate available for the kubelet" Feb 24 10:39:26 crc kubenswrapper[4755]: I0224 10:39:26.330911 4755 scope.go:117] "RemoveContainer" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed" Feb 24 10:39:26 crc kubenswrapper[4755]: E0224 10:39:26.331397 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:39:26 crc kubenswrapper[4755]: I0224 10:39:26.980144 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44724: no serving certificate available for the kubelet" Feb 24 10:39:29 crc kubenswrapper[4755]: I0224 10:39:29.057249 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44734: no serving certificate available for the kubelet" Feb 24 10:39:30 crc kubenswrapper[4755]: I0224 10:39:30.038622 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44736: no serving certificate available for the kubelet" Feb 24 10:39:32 crc kubenswrapper[4755]: I0224 10:39:32.124271 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44750: no serving certificate available for the kubelet" Feb 24 10:39:33 crc kubenswrapper[4755]: I0224 10:39:33.093638 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44752: no serving certificate available for the kubelet" Feb 24 10:39:35 crc kubenswrapper[4755]: I0224 10:39:35.181109 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52860: no serving certificate available for the kubelet" Feb 24 10:39:36 crc kubenswrapper[4755]: I0224 10:39:36.153750 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52872: no serving certificate available for the kubelet" Feb 24 10:39:37 crc kubenswrapper[4755]: I0224 10:39:37.316863 4755 scope.go:117] "RemoveContainer" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed" Feb 24 10:39:37 crc kubenswrapper[4755]: E0224 10:39:37.317775 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:39:38 crc kubenswrapper[4755]: I0224 10:39:38.238813 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52878: no serving certificate available for the kubelet" Feb 24 10:39:39 crc kubenswrapper[4755]: I0224 10:39:39.213207 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52886: no serving certificate available for the kubelet" Feb 24 10:39:41 crc kubenswrapper[4755]: I0224 10:39:41.300665 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52890: no serving certificate available for the kubelet" Feb 24 10:39:42 crc kubenswrapper[4755]: I0224 10:39:42.272467 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52896: no serving certificate available for the kubelet" Feb 24 10:39:44 crc kubenswrapper[4755]: I0224 10:39:44.344568 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41968: no serving certificate available for the kubelet" Feb 24 10:39:45 crc kubenswrapper[4755]: I0224 10:39:45.333024 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41970: no serving certificate available for the kubelet" Feb 24 10:39:47 crc kubenswrapper[4755]: I0224 10:39:47.383708 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41972: no serving certificate available for the kubelet" Feb 24 10:39:48 crc kubenswrapper[4755]: I0224 10:39:48.318312 4755 scope.go:117] "RemoveContainer" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed" Feb 24 10:39:48 crc kubenswrapper[4755]: E0224 10:39:48.318789 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:39:48 crc kubenswrapper[4755]: I0224 10:39:48.429295 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41974: no serving certificate available for the kubelet" Feb 24 10:39:50 crc kubenswrapper[4755]: I0224 10:39:50.422902 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41984: no serving certificate available for the kubelet" Feb 24 10:39:51 crc kubenswrapper[4755]: I0224 10:39:51.473737 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42000: no serving certificate available for the kubelet" Feb 24 10:39:53 crc kubenswrapper[4755]: I0224 10:39:53.467418 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42014: no serving certificate available for the kubelet" Feb 24 10:39:54 crc kubenswrapper[4755]: I0224 10:39:54.527905 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56652: no serving certificate available for the kubelet" Feb 24 10:39:56 crc kubenswrapper[4755]: I0224 10:39:56.515167 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56664: no serving certificate available for the kubelet" Feb 24 10:39:57 crc kubenswrapper[4755]: I0224 10:39:57.565143 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56666: no serving certificate available for the kubelet" Feb 24 10:39:59 crc kubenswrapper[4755]: I0224 10:39:59.577957 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56674: no serving certificate available for the kubelet" Feb 24 10:40:00 crc kubenswrapper[4755]: I0224 10:40:00.599884 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56684: no serving certificate available for the kubelet" Feb 24 10:40:02 crc kubenswrapper[4755]: I0224 10:40:02.316659 4755 scope.go:117] "RemoveContainer" 
containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed" Feb 24 10:40:02 crc kubenswrapper[4755]: I0224 10:40:02.617570 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56696: no serving certificate available for the kubelet" Feb 24 10:40:03 crc kubenswrapper[4755]: I0224 10:40:03.032007 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerStarted","Data":"e6973520b04ad15380b7bffb930a3b7d08f7087e679d5562e13d81f8ff1a623f"} Feb 24 10:40:03 crc kubenswrapper[4755]: I0224 10:40:03.637266 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56702: no serving certificate available for the kubelet" Feb 24 10:40:05 crc kubenswrapper[4755]: I0224 10:40:05.661816 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59016: no serving certificate available for the kubelet" Feb 24 10:40:06 crc kubenswrapper[4755]: I0224 10:40:06.681218 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59028: no serving certificate available for the kubelet" Feb 24 10:40:08 crc kubenswrapper[4755]: I0224 10:40:08.706035 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59030: no serving certificate available for the kubelet" Feb 24 10:40:09 crc kubenswrapper[4755]: I0224 10:40:09.726756 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59046: no serving certificate available for the kubelet" Feb 24 10:40:11 crc kubenswrapper[4755]: I0224 10:40:11.759777 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59062: no serving certificate available for the kubelet" Feb 24 10:40:12 crc kubenswrapper[4755]: I0224 10:40:12.788745 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59066: no serving certificate available for the kubelet" Feb 24 10:40:14 crc kubenswrapper[4755]: I0224 10:40:14.804557 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41896: no serving 
certificate available for the kubelet" Feb 24 10:40:15 crc kubenswrapper[4755]: I0224 10:40:15.842969 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41902: no serving certificate available for the kubelet" Feb 24 10:40:17 crc kubenswrapper[4755]: I0224 10:40:17.845319 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41910: no serving certificate available for the kubelet" Feb 24 10:40:18 crc kubenswrapper[4755]: I0224 10:40:18.886672 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41912: no serving certificate available for the kubelet" Feb 24 10:40:20 crc kubenswrapper[4755]: I0224 10:40:20.898254 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41924: no serving certificate available for the kubelet" Feb 24 10:40:21 crc kubenswrapper[4755]: I0224 10:40:21.944943 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41934: no serving certificate available for the kubelet" Feb 24 10:40:23 crc kubenswrapper[4755]: I0224 10:40:23.956444 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50434: no serving certificate available for the kubelet" Feb 24 10:40:24 crc kubenswrapper[4755]: I0224 10:40:24.981247 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50446: no serving certificate available for the kubelet" Feb 24 10:40:27 crc kubenswrapper[4755]: I0224 10:40:27.015037 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50460: no serving certificate available for the kubelet" Feb 24 10:40:28 crc kubenswrapper[4755]: I0224 10:40:28.032659 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50468: no serving certificate available for the kubelet" Feb 24 10:40:30 crc kubenswrapper[4755]: I0224 10:40:30.100978 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50482: no serving certificate available for the kubelet" Feb 24 10:40:30 crc kubenswrapper[4755]: I0224 10:40:30.243974 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50488: no serving certificate available for the 
kubelet"
Feb 24 10:40:31 crc kubenswrapper[4755]: I0224 10:40:31.077271 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50490: no serving certificate available for the kubelet"
Feb 24 10:40:32 crc kubenswrapper[4755]: I0224 10:40:32.005248 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50494: no serving certificate available for the kubelet"
Feb 24 10:40:33 crc kubenswrapper[4755]: I0224 10:40:33.169779 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50504: no serving certificate available for the kubelet"
Feb 24 10:40:34 crc kubenswrapper[4755]: I0224 10:40:34.129998 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52856: no serving certificate available for the kubelet"
Feb 24 10:40:36 crc kubenswrapper[4755]: I0224 10:40:36.226936 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52862: no serving certificate available for the kubelet"
Feb 24 10:40:37 crc kubenswrapper[4755]: I0224 10:40:37.179624 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52868: no serving certificate available for the kubelet"
Feb 24 10:40:39 crc kubenswrapper[4755]: I0224 10:40:39.284213 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52880: no serving certificate available for the kubelet"
Feb 24 10:40:40 crc kubenswrapper[4755]: I0224 10:40:40.239276 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52888: no serving certificate available for the kubelet"
Feb 24 10:40:42 crc kubenswrapper[4755]: I0224 10:40:42.345972 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52894: no serving certificate available for the kubelet"
Feb 24 10:40:43 crc kubenswrapper[4755]: I0224 10:40:43.284349 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52902: no serving certificate available for the kubelet"
Feb 24 10:40:45 crc kubenswrapper[4755]: I0224 10:40:45.443668 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49596: no serving certificate available for the kubelet"
Feb 24 10:40:46 crc kubenswrapper[4755]: I0224 10:40:46.332919 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49612: no serving certificate available for the kubelet"
Feb 24 10:40:48 crc kubenswrapper[4755]: I0224 10:40:48.495282 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49624: no serving certificate available for the kubelet"
Feb 24 10:40:49 crc kubenswrapper[4755]: I0224 10:40:49.366612 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49628: no serving certificate available for the kubelet"
Feb 24 10:40:51 crc kubenswrapper[4755]: I0224 10:40:51.564430 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49634: no serving certificate available for the kubelet"
Feb 24 10:40:52 crc kubenswrapper[4755]: I0224 10:40:52.422443 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49644: no serving certificate available for the kubelet"
Feb 24 10:40:54 crc kubenswrapper[4755]: I0224 10:40:54.620609 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43080: no serving certificate available for the kubelet"
Feb 24 10:40:55 crc kubenswrapper[4755]: I0224 10:40:55.460577 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43094: no serving certificate available for the kubelet"
Feb 24 10:40:57 crc kubenswrapper[4755]: I0224 10:40:57.700187 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43110: no serving certificate available for the kubelet"
Feb 24 10:40:58 crc kubenswrapper[4755]: I0224 10:40:58.518807 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43118: no serving certificate available for the kubelet"
Feb 24 10:41:00 crc kubenswrapper[4755]: I0224 10:41:00.737734 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43122: no serving certificate available for the kubelet"
Feb 24 10:41:01 crc kubenswrapper[4755]: I0224 10:41:01.577427 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43138: no serving certificate available for the kubelet"
Feb 24 10:41:03 crc kubenswrapper[4755]: I0224 10:41:03.792915 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46312: no serving certificate available for the kubelet"
Feb 24 10:41:04 crc kubenswrapper[4755]: I0224 10:41:04.630843 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46318: no serving certificate available for the kubelet"
Feb 24 10:41:06 crc kubenswrapper[4755]: I0224 10:41:06.854372 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46326: no serving certificate available for the kubelet"
Feb 24 10:41:07 crc kubenswrapper[4755]: I0224 10:41:07.683145 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46332: no serving certificate available for the kubelet"
Feb 24 10:41:09 crc kubenswrapper[4755]: I0224 10:41:09.914965 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46340: no serving certificate available for the kubelet"
Feb 24 10:41:10 crc kubenswrapper[4755]: I0224 10:41:10.748101 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46350: no serving certificate available for the kubelet"
Feb 24 10:41:12 crc kubenswrapper[4755]: I0224 10:41:12.998211 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46364: no serving certificate available for the kubelet"
Feb 24 10:41:13 crc kubenswrapper[4755]: I0224 10:41:13.780356 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52752: no serving certificate available for the kubelet"
Feb 24 10:41:16 crc kubenswrapper[4755]: I0224 10:41:16.051262 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52766: no serving certificate available for the kubelet"
Feb 24 10:41:16 crc kubenswrapper[4755]: I0224 10:41:16.833975 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52770: no serving certificate available for the kubelet"
Feb 24 10:41:19 crc kubenswrapper[4755]: I0224 10:41:19.106423 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52774: no serving certificate available for the kubelet"
Feb 24 10:41:19 crc kubenswrapper[4755]: I0224 10:41:19.890952 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52790: no serving certificate available for the kubelet"
Feb 24 10:41:22 crc kubenswrapper[4755]: I0224 10:41:22.147279 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52802: no serving certificate available for the kubelet"
Feb 24 10:41:22 crc kubenswrapper[4755]: I0224 10:41:22.942620 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52814: no serving certificate available for the kubelet"
Feb 24 10:41:25 crc kubenswrapper[4755]: I0224 10:41:25.201513 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55388: no serving certificate available for the kubelet"
Feb 24 10:41:25 crc kubenswrapper[4755]: I0224 10:41:25.987998 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55394: no serving certificate available for the kubelet"
Feb 24 10:41:28 crc kubenswrapper[4755]: I0224 10:41:28.290005 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55408: no serving certificate available for the kubelet"
Feb 24 10:41:29 crc kubenswrapper[4755]: I0224 10:41:29.042683 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55412: no serving certificate available for the kubelet"
Feb 24 10:41:31 crc kubenswrapper[4755]: I0224 10:41:31.351382 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55422: no serving certificate available for the kubelet"
Feb 24 10:41:32 crc kubenswrapper[4755]: I0224 10:41:32.100606 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55432: no serving certificate available for the kubelet"
Feb 24 10:41:34 crc kubenswrapper[4755]: I0224 10:41:34.454831 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46240: no serving certificate available for the kubelet"
Feb 24 10:41:35 crc kubenswrapper[4755]: I0224 10:41:35.167469 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46252: no serving certificate available for the kubelet"
Feb 24 10:41:37 crc kubenswrapper[4755]: I0224 10:41:37.502412 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46260: no serving certificate available for the kubelet"
Feb 24 10:41:38 crc kubenswrapper[4755]: I0224 10:41:38.225325 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46274: no serving certificate available for the kubelet"
Feb 24 10:41:40 crc kubenswrapper[4755]: I0224 10:41:40.559750 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46286: no serving certificate available for the kubelet"
Feb 24 10:41:41 crc kubenswrapper[4755]: I0224 10:41:41.297411 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46290: no serving certificate available for the kubelet"
Feb 24 10:41:43 crc kubenswrapper[4755]: I0224 10:41:43.617719 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46292: no serving certificate available for the kubelet"
Feb 24 10:41:44 crc kubenswrapper[4755]: I0224 10:41:44.358922 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41198: no serving certificate available for the kubelet"
Feb 24 10:41:46 crc kubenswrapper[4755]: I0224 10:41:46.682060 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41208: no serving certificate available for the kubelet"
Feb 24 10:41:47 crc kubenswrapper[4755]: I0224 10:41:47.413286 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41214: no serving certificate available for the kubelet"
Feb 24 10:41:49 crc kubenswrapper[4755]: I0224 10:41:49.728617 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41216: no serving certificate available for the kubelet"
Feb 24 10:41:50 crc kubenswrapper[4755]: I0224 10:41:50.459611 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41222: no serving certificate available for the kubelet"
Feb 24 10:41:52 crc kubenswrapper[4755]: I0224 10:41:52.780181 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41224: no serving certificate available for the kubelet"
Feb 24 10:41:53 crc kubenswrapper[4755]: I0224 10:41:53.521144 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41228: no serving certificate available for the kubelet"
Feb 24 10:41:55 crc kubenswrapper[4755]: I0224 10:41:55.828650 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34726: no serving certificate available for the kubelet"
Feb 24 10:41:56 crc kubenswrapper[4755]: I0224 10:41:56.563250 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34736: no serving certificate available for the kubelet"
Feb 24 10:41:58 crc kubenswrapper[4755]: I0224 10:41:58.883192 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34752: no serving certificate available for the kubelet"
Feb 24 10:41:59 crc kubenswrapper[4755]: I0224 10:41:59.610694 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34766: no serving certificate available for the kubelet"
Feb 24 10:42:01 crc kubenswrapper[4755]: I0224 10:42:01.940160 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34778: no serving certificate available for the kubelet"
Feb 24 10:42:02 crc kubenswrapper[4755]: I0224 10:42:02.650759 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34788: no serving certificate available for the kubelet"
Feb 24 10:42:04 crc kubenswrapper[4755]: I0224 10:42:04.980124 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51990: no serving certificate available for the kubelet"
Feb 24 10:42:05 crc kubenswrapper[4755]: I0224 10:42:05.698404 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52000: no serving certificate available for the kubelet"
Feb 24 10:42:08 crc kubenswrapper[4755]: I0224 10:42:08.029035 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52014: no serving certificate available for the kubelet"
Feb 24 10:42:08 crc kubenswrapper[4755]: I0224 10:42:08.792857 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52028: no serving certificate available for the kubelet"
Feb 24 10:42:11 crc kubenswrapper[4755]: I0224 10:42:11.083344 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52036: no serving certificate available for the kubelet"
Feb 24 10:42:11 crc kubenswrapper[4755]: I0224 10:42:11.850650 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52038: no serving certificate available for the kubelet"
Feb 24 10:42:14 crc kubenswrapper[4755]: I0224 10:42:14.144874 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58556: no serving certificate available for the kubelet"
Feb 24 10:42:14 crc kubenswrapper[4755]: I0224 10:42:14.896052 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58562: no serving certificate available for the kubelet"
Feb 24 10:42:17 crc kubenswrapper[4755]: I0224 10:42:17.189488 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58564: no serving certificate available for the kubelet"
Feb 24 10:42:17 crc kubenswrapper[4755]: I0224 10:42:17.946436 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58580: no serving certificate available for the kubelet"
Feb 24 10:42:20 crc kubenswrapper[4755]: I0224 10:42:20.226463 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58586: no serving certificate available for the kubelet"
Feb 24 10:42:21 crc kubenswrapper[4755]: I0224 10:42:21.025423 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58602: no serving certificate available for the kubelet"
Feb 24 10:42:21 crc kubenswrapper[4755]: I0224 10:42:21.695220 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 10:42:21 crc kubenswrapper[4755]: I0224 10:42:21.695318 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 10:42:23 crc kubenswrapper[4755]: I0224 10:42:23.288678 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58606: no serving certificate available for the kubelet"
Feb 24 10:42:24 crc kubenswrapper[4755]: I0224 10:42:24.113213 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53970: no serving certificate available for the kubelet"
Feb 24 10:42:26 crc kubenswrapper[4755]: I0224 10:42:26.356849 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53972: no serving certificate available for the kubelet"
Feb 24 10:42:27 crc kubenswrapper[4755]: I0224 10:42:27.159968 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53980: no serving certificate available for the kubelet"
Feb 24 10:42:29 crc kubenswrapper[4755]: I0224 10:42:29.410962 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53992: no serving certificate available for the kubelet"
Feb 24 10:42:30 crc kubenswrapper[4755]: I0224 10:42:30.206865 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54000: no serving certificate available for the kubelet"
Feb 24 10:42:32 crc kubenswrapper[4755]: I0224 10:42:32.468606 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54008: no serving certificate available for the kubelet"
Feb 24 10:42:33 crc kubenswrapper[4755]: I0224 10:42:33.263431 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54012: no serving certificate available for the kubelet"
Feb 24 10:42:35 crc kubenswrapper[4755]: I0224 10:42:35.526983 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52646: no serving certificate available for the kubelet"
Feb 24 10:42:36 crc kubenswrapper[4755]: I0224 10:42:36.320516 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52658: no serving certificate available for the kubelet"
Feb 24 10:42:38 crc kubenswrapper[4755]: I0224 10:42:38.575188 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52672: no serving certificate available for the kubelet"
Feb 24 10:42:39 crc kubenswrapper[4755]: I0224 10:42:39.366355 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52688: no serving certificate available for the kubelet"
Feb 24 10:42:41 crc kubenswrapper[4755]: I0224 10:42:41.621385 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52694: no serving certificate available for the kubelet"
Feb 24 10:42:42 crc kubenswrapper[4755]: I0224 10:42:42.417957 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52710: no serving certificate available for the kubelet"
Feb 24 10:42:44 crc kubenswrapper[4755]: I0224 10:42:44.671328 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55794: no serving certificate available for the kubelet"
Feb 24 10:42:45 crc kubenswrapper[4755]: I0224 10:42:45.459810 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55798: no serving certificate available for the kubelet"
Feb 24 10:42:47 crc kubenswrapper[4755]: I0224 10:42:47.729332 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55800: no serving certificate available for the kubelet"
Feb 24 10:42:48 crc kubenswrapper[4755]: I0224 10:42:48.506503 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55802: no serving certificate available for the kubelet"
Feb 24 10:42:50 crc kubenswrapper[4755]: I0224 10:42:50.797461 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55810: no serving certificate available for the kubelet"
Feb 24 10:42:51 crc kubenswrapper[4755]: I0224 10:42:51.572955 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55816: no serving certificate available for the kubelet"
Feb 24 10:42:51 crc kubenswrapper[4755]: I0224 10:42:51.695150 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 10:42:51 crc kubenswrapper[4755]: I0224 10:42:51.695238 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 10:42:53 crc kubenswrapper[4755]: I0224 10:42:53.861977 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47480: no serving certificate available for the kubelet"
Feb 24 10:42:54 crc kubenswrapper[4755]: I0224 10:42:54.633538 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47486: no serving certificate available for the kubelet"
Feb 24 10:42:56 crc kubenswrapper[4755]: I0224 10:42:56.901765 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47490: no serving certificate available for the kubelet"
Feb 24 10:42:57 crc kubenswrapper[4755]: I0224 10:42:57.681568 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47496: no serving certificate available for the kubelet"
Feb 24 10:42:59 crc kubenswrapper[4755]: I0224 10:42:59.612444 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" containerName="galera" probeResult="failure" output=<
Feb 24 10:42:59 crc kubenswrapper[4755]: waiting for gcomm URI
Feb 24 10:42:59 crc kubenswrapper[4755]: >
Feb 24 10:42:59 crc kubenswrapper[4755]: I0224 10:42:59.612902 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 24 10:42:59 crc kubenswrapper[4755]: I0224 10:42:59.613611 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"63b43b5bff9a2589978e4d97ca45ea305c524e73be8cfeed915cc1ece0bd0323"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed startup probe, will be restarted"
Feb 24 10:42:59 crc kubenswrapper[4755]: I0224 10:42:59.668230 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" containerName="galera" containerID="cri-o://63b43b5bff9a2589978e4d97ca45ea305c524e73be8cfeed915cc1ece0bd0323" gracePeriod=30
Feb 24 10:42:59 crc kubenswrapper[4755]: I0224 10:42:59.953216 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47498: no serving certificate available for the kubelet"
Feb 24 10:43:00 crc kubenswrapper[4755]: I0224 10:43:00.783524 4755 generic.go:334] "Generic (PLEG): container finished" podID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" containerID="63b43b5bff9a2589978e4d97ca45ea305c524e73be8cfeed915cc1ece0bd0323" exitCode=143
Feb 24 10:43:00 crc kubenswrapper[4755]: I0224 10:43:00.783582 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe13802e-a28d-4e11-a315-c0ae66bf0e1c","Type":"ContainerDied","Data":"63b43b5bff9a2589978e4d97ca45ea305c524e73be8cfeed915cc1ece0bd0323"}
Feb 24 10:43:00 crc kubenswrapper[4755]: I0224 10:43:00.783619 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe13802e-a28d-4e11-a315-c0ae66bf0e1c","Type":"ContainerStarted","Data":"3a66d8dae34f6f20bcc5933cf924596e95ac093c06c4a681a04907cdab739f53"}
Feb 24 10:43:00 crc kubenswrapper[4755]: I0224 10:43:00.783646 4755 scope.go:117] "RemoveContainer" containerID="800f98376a75908f371f36101dfbbd70164bb3acb8517901561bd0e5eddf4c78"
Feb 24 10:43:00 crc kubenswrapper[4755]: I0224 10:43:00.793972 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47506: no serving certificate available for the kubelet"
Feb 24 10:43:02 crc kubenswrapper[4755]: I0224 10:43:02.859381 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39" containerName="galera" probeResult="failure" output=<
Feb 24 10:43:02 crc kubenswrapper[4755]: waiting for gcomm URI
Feb 24 10:43:02 crc kubenswrapper[4755]: >
Feb 24 10:43:02 crc kubenswrapper[4755]: I0224 10:43:02.860345 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 24 10:43:02 crc kubenswrapper[4755]: I0224 10:43:02.860930 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"a84b3cec371eaf91d0e311ef88e8da2d9e2fb0557b9c34f1a91d87c697bed797"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed startup probe, will be restarted"
Feb 24 10:43:02 crc kubenswrapper[4755]: I0224 10:43:02.930393 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39" containerName="galera" containerID="cri-o://a84b3cec371eaf91d0e311ef88e8da2d9e2fb0557b9c34f1a91d87c697bed797" gracePeriod=30
Feb 24 10:43:02 crc kubenswrapper[4755]: I0224 10:43:02.998604 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47516: no serving certificate available for the kubelet"
Feb 24 10:43:03 crc kubenswrapper[4755]: I0224 10:43:03.816606 4755 generic.go:334] "Generic (PLEG): container finished" podID="2f320527-691f-48e9-a243-f60bc805da39" containerID="a84b3cec371eaf91d0e311ef88e8da2d9e2fb0557b9c34f1a91d87c697bed797" exitCode=143
Feb 24 10:43:03 crc kubenswrapper[4755]: I0224 10:43:03.816729 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2f320527-691f-48e9-a243-f60bc805da39","Type":"ContainerDied","Data":"a84b3cec371eaf91d0e311ef88e8da2d9e2fb0557b9c34f1a91d87c697bed797"}
Feb 24 10:43:03 crc kubenswrapper[4755]: I0224 10:43:03.817180 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2f320527-691f-48e9-a243-f60bc805da39","Type":"ContainerStarted","Data":"ef4ca47fbdee8a817270c1b69e52b17aa4e27f34d8d6eed7da45026d0c6a003d"}
Feb 24 10:43:03 crc kubenswrapper[4755]: I0224 10:43:03.817246 4755 scope.go:117] "RemoveContainer" containerID="249cff2e52b9673481d5d24fc8158108cdf8f7eb45d935cdf1ec11a460574df5"
Feb 24 10:43:03 crc kubenswrapper[4755]: I0224 10:43:03.842122 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40072: no serving certificate available for the kubelet"
Feb 24 10:43:06 crc kubenswrapper[4755]: I0224 10:43:06.048625 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40086: no serving certificate available for the kubelet"
Feb 24 10:43:06 crc kubenswrapper[4755]: E0224 10:43:06.805584 4755 certificate_manager.go:579] "Unhandled Error" err="kubernetes.io/kubelet-serving: certificate request was not signed: timed out waiting for the condition" logger="UnhandledError"
Feb 24 10:43:06 crc kubenswrapper[4755]: I0224 10:43:06.904983 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40092: no serving certificate available for the kubelet"
Feb 24 10:43:09 crc kubenswrapper[4755]: I0224 10:43:09.099769 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40100: no serving certificate available for the kubelet"
Feb 24 10:43:09 crc kubenswrapper[4755]: I0224 10:43:09.887605 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 24 10:43:09 crc kubenswrapper[4755]: I0224 10:43:09.888104 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 24 10:43:09 crc kubenswrapper[4755]: I0224 10:43:09.962322 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40112: no serving certificate available for the kubelet"
Feb 24 10:43:11 crc kubenswrapper[4755]: I0224 10:43:11.289262 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 24 10:43:11 crc kubenswrapper[4755]: I0224 10:43:11.289324 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 24 10:43:12 crc kubenswrapper[4755]: I0224 10:43:12.145379 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40114: no serving certificate available for the kubelet"
Feb 24 10:43:13 crc kubenswrapper[4755]: I0224 10:43:13.020633 4755 ???:1] "http: TLS handshake error from 192.168.126.11:40118: no serving certificate available for the kubelet"
Feb 24 10:43:15 crc kubenswrapper[4755]: I0224 10:43:15.084203 4755 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Feb 24 10:43:15 crc kubenswrapper[4755]: I0224 10:43:15.095125 4755 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 24 10:43:15 crc kubenswrapper[4755]: I0224 10:43:15.110996 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41144: no serving certificate available for the kubelet"
Feb 24 10:43:15 crc kubenswrapper[4755]: I0224 10:43:15.149326 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41150: no serving certificate available for the kubelet"
Feb 24 10:43:15 crc kubenswrapper[4755]: I0224 10:43:15.185733 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41152: no serving certificate available for the kubelet"
Feb 24 10:43:15 crc kubenswrapper[4755]: I0224 10:43:15.185969 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41154: no serving certificate available for the kubelet"
Feb 24 10:43:15 crc kubenswrapper[4755]: I0224 10:43:15.229791 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41158: no serving certificate available for the kubelet"
Feb 24 10:43:15 crc kubenswrapper[4755]: I0224 10:43:15.298353 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41168: no serving certificate available for the kubelet"
Feb 24 10:43:15 crc kubenswrapper[4755]: I0224 10:43:15.402411 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41180: no serving certificate available for the kubelet"
Feb 24 10:43:15 crc kubenswrapper[4755]: I0224 10:43:15.595655 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41196: no serving certificate available for the kubelet"
Feb 24 10:43:15 crc kubenswrapper[4755]: I0224 10:43:15.954281 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41202: no serving certificate available for the kubelet"
Feb 24 10:43:16 crc kubenswrapper[4755]: I0224 10:43:16.078926 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41206: no serving certificate available for the kubelet"
Feb 24 10:43:16 crc kubenswrapper[4755]: I0224 10:43:16.631544 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41222: no serving certificate available for the kubelet"
Feb 24 10:43:17 crc kubenswrapper[4755]: I0224 10:43:17.942891 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41226: no serving certificate available for the kubelet"
Feb 24 10:43:18 crc kubenswrapper[4755]: I0224 10:43:18.224605 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41236: no serving certificate available for the kubelet"
Feb 24 10:43:19 crc kubenswrapper[4755]: I0224 10:43:19.128836 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41238: no serving certificate available for the kubelet"
Feb 24 10:43:20 crc kubenswrapper[4755]: I0224 10:43:20.534144 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41250: no serving certificate available for the kubelet"
Feb 24 10:43:21 crc kubenswrapper[4755]: I0224 10:43:21.285018 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41266: no serving certificate available for the kubelet"
Feb 24 10:43:21 crc kubenswrapper[4755]: I0224 10:43:21.695599 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 10:43:21 crc kubenswrapper[4755]: I0224 10:43:21.695683 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 10:43:21 crc kubenswrapper[4755]: I0224 10:43:21.695740 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll"
Feb 24 10:43:21 crc kubenswrapper[4755]: I0224 10:43:21.696556 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e6973520b04ad15380b7bffb930a3b7d08f7087e679d5562e13d81f8ff1a623f"} pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 24 10:43:21 crc kubenswrapper[4755]: I0224 10:43:21.696663 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" containerID="cri-o://e6973520b04ad15380b7bffb930a3b7d08f7087e679d5562e13d81f8ff1a623f" gracePeriod=600
Feb 24 10:43:22 crc kubenswrapper[4755]: I0224 10:43:22.008634 4755 generic.go:334] "Generic (PLEG): container finished" podID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerID="e6973520b04ad15380b7bffb930a3b7d08f7087e679d5562e13d81f8ff1a623f" exitCode=0
Feb 24 10:43:22 crc kubenswrapper[4755]: I0224 10:43:22.008712 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerDied","Data":"e6973520b04ad15380b7bffb930a3b7d08f7087e679d5562e13d81f8ff1a623f"}
Feb 24 10:43:22 crc kubenswrapper[4755]: I0224 10:43:22.009022 4755 scope.go:117] "RemoveContainer" containerID="3c9c0418cea393a784a89a60438561631146bddc7706f5e41bc6baa6443b20ed"
Feb 24 10:43:22 crc kubenswrapper[4755]: I0224 10:43:22.183382 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41280: no serving certificate available for the kubelet"
Feb 24 10:43:23 crc kubenswrapper[4755]: I0224 10:43:23.024338 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerStarted","Data":"1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258"}
Feb 24 10:43:24 crc kubenswrapper[4755]: I0224 10:43:24.071629 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r97pf"]
Feb 24 10:43:24 crc kubenswrapper[4755]: E0224 10:43:24.072267 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0150d1b-bae0-4fb1-bed4-aaf87ebec628" containerName="registry-server"
Feb 24 10:43:24 crc kubenswrapper[4755]: I0224 10:43:24.072282 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0150d1b-bae0-4fb1-bed4-aaf87ebec628" containerName="registry-server"
Feb 24 10:43:24 crc kubenswrapper[4755]: E0224 10:43:24.072300 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0150d1b-bae0-4fb1-bed4-aaf87ebec628" containerName="extract-utilities"
Feb 24 10:43:24 crc kubenswrapper[4755]: I0224 10:43:24.072310 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0150d1b-bae0-4fb1-bed4-aaf87ebec628" containerName="extract-utilities"
Feb 24 10:43:24 crc kubenswrapper[4755]: E0224 10:43:24.072341 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0150d1b-bae0-4fb1-bed4-aaf87ebec628" containerName="extract-content"
Feb 24 10:43:24 crc kubenswrapper[4755]: I0224 10:43:24.072349 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0150d1b-bae0-4fb1-bed4-aaf87ebec628" containerName="extract-content"
Feb 24 10:43:24 crc kubenswrapper[4755]: I0224 10:43:24.072563 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0150d1b-bae0-4fb1-bed4-aaf87ebec628" containerName="registry-server"
Feb 24 10:43:24 crc kubenswrapper[4755]: I0224 10:43:24.073880 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r97pf"
Feb 24 10:43:24 crc kubenswrapper[4755]: I0224 10:43:24.112887 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r97pf"]
Feb 24 10:43:24 crc kubenswrapper[4755]: I0224 10:43:24.134792 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72010696-96cb-43d6-a4e8-5613e282ccfb-utilities\") pod \"redhat-operators-r97pf\" (UID: \"72010696-96cb-43d6-a4e8-5613e282ccfb\") " pod="openshift-marketplace/redhat-operators-r97pf"
Feb 24 10:43:24 crc kubenswrapper[4755]: I0224 10:43:24.135037 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9fgj\" (UniqueName: \"kubernetes.io/projected/72010696-96cb-43d6-a4e8-5613e282ccfb-kube-api-access-q9fgj\") pod \"redhat-operators-r97pf\" (UID: \"72010696-96cb-43d6-a4e8-5613e282ccfb\") " pod="openshift-marketplace/redhat-operators-r97pf"
Feb 24 10:43:24 crc kubenswrapper[4755]: I0224 10:43:24.135280 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72010696-96cb-43d6-a4e8-5613e282ccfb-catalog-content\") pod \"redhat-operators-r97pf\" (UID: \"72010696-96cb-43d6-a4e8-5613e282ccfb\") " pod="openshift-marketplace/redhat-operators-r97pf"
Feb 24 10:43:24 crc kubenswrapper[4755]: I0224 10:43:24.237145 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72010696-96cb-43d6-a4e8-5613e282ccfb-catalog-content\") pod \"redhat-operators-r97pf\" (UID: \"72010696-96cb-43d6-a4e8-5613e282ccfb\") " pod="openshift-marketplace/redhat-operators-r97pf"
Feb 24 10:43:24 crc kubenswrapper[4755]: I0224 10:43:24.237214 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72010696-96cb-43d6-a4e8-5613e282ccfb-utilities\") pod \"redhat-operators-r97pf\" (UID: \"72010696-96cb-43d6-a4e8-5613e282ccfb\") " pod="openshift-marketplace/redhat-operators-r97pf"
Feb 24 10:43:24 crc kubenswrapper[4755]: I0224 10:43:24.237291 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9fgj\" (UniqueName: \"kubernetes.io/projected/72010696-96cb-43d6-a4e8-5613e282ccfb-kube-api-access-q9fgj\") pod \"redhat-operators-r97pf\" (UID: \"72010696-96cb-43d6-a4e8-5613e282ccfb\") " pod="openshift-marketplace/redhat-operators-r97pf"
Feb 24 10:43:24 crc kubenswrapper[4755]: I0224 10:43:24.237843 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72010696-96cb-43d6-a4e8-5613e282ccfb-catalog-content\") pod \"redhat-operators-r97pf\" (UID: \"72010696-96cb-43d6-a4e8-5613e282ccfb\") " pod="openshift-marketplace/redhat-operators-r97pf"
Feb 24 10:43:24 crc kubenswrapper[4755]: I0224 10:43:24.237877 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72010696-96cb-43d6-a4e8-5613e282ccfb-utilities\") pod \"redhat-operators-r97pf\" (UID: \"72010696-96cb-43d6-a4e8-5613e282ccfb\") " pod="openshift-marketplace/redhat-operators-r97pf"
Feb 24 10:43:24 crc kubenswrapper[4755]: I0224 10:43:24.259152 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9fgj\" (UniqueName: \"kubernetes.io/projected/72010696-96cb-43d6-a4e8-5613e282ccfb-kube-api-access-q9fgj\") pod \"redhat-operators-r97pf\" (UID: \"72010696-96cb-43d6-a4e8-5613e282ccfb\") " pod="openshift-marketplace/redhat-operators-r97pf"
Feb 24 10:43:24 crc kubenswrapper[4755]: I0224 10:43:24.328972 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39250: no serving certificate available for the kubelet"
Feb 24 10:43:24 crc kubenswrapper[4755]: I0224 10:43:24.412220 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r97pf"
Feb 24 10:43:24 crc kubenswrapper[4755]: I0224 10:43:24.852889 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r97pf"]
Feb 24 10:43:25 crc kubenswrapper[4755]: I0224 10:43:25.043808 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r97pf" event={"ID":"72010696-96cb-43d6-a4e8-5613e282ccfb","Type":"ContainerStarted","Data":"2a4d0761eaad7e1b71c7960d63a1cd5f633b5de909e46ba3973c54e01a021e47"}
Feb 24 10:43:25 crc kubenswrapper[4755]: I0224 10:43:25.044147 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r97pf" event={"ID":"72010696-96cb-43d6-a4e8-5613e282ccfb","Type":"ContainerStarted","Data":"9d6bb63219c0f4e119d0b60064f676f98bb070005d868264f05253a2d2401a86"}
Feb 24 10:43:25 crc kubenswrapper[4755]: I0224 10:43:25.232748 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39260: no serving certificate available for the kubelet"
Feb 24 10:43:25 crc kubenswrapper[4755]: I0224 10:43:25.680744 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39262: no serving certificate available for the kubelet"
Feb 24 10:43:26 crc kubenswrapper[4755]: I0224 10:43:26.061638 4755 generic.go:334] "Generic (PLEG): container finished" podID="72010696-96cb-43d6-a4e8-5613e282ccfb" containerID="2a4d0761eaad7e1b71c7960d63a1cd5f633b5de909e46ba3973c54e01a021e47" exitCode=0
Feb 24 10:43:26 crc kubenswrapper[4755]: I0224 10:43:26.061734 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r97pf" event={"ID":"72010696-96cb-43d6-a4e8-5613e282ccfb","Type":"ContainerDied","Data":"2a4d0761eaad7e1b71c7960d63a1cd5f633b5de909e46ba3973c54e01a021e47"}
Feb 24 10:43:26 crc kubenswrapper[4755]: I0224 10:43:26.065654 4755 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 24 10:43:27 crc kubenswrapper[4755]: I0224 10:43:27.078897 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r97pf" event={"ID":"72010696-96cb-43d6-a4e8-5613e282ccfb","Type":"ContainerStarted","Data":"ff6917368f6ef654424557ebb6454077de8cf535e076a578eeabaa070dfb2d0d"}
Feb 24 10:43:27 crc kubenswrapper[4755]: I0224 10:43:27.370306 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39278: no serving certificate available for the kubelet"
Feb 24 10:43:28 crc kubenswrapper[4755]: I0224 10:43:28.089430 4755 generic.go:334] "Generic (PLEG): container finished" podID="72010696-96cb-43d6-a4e8-5613e282ccfb" containerID="ff6917368f6ef654424557ebb6454077de8cf535e076a578eeabaa070dfb2d0d" exitCode=0
Feb 24 10:43:28 crc kubenswrapper[4755]: I0224 10:43:28.089483 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r97pf" event={"ID":"72010696-96cb-43d6-a4e8-5613e282ccfb","Type":"ContainerDied","Data":"ff6917368f6ef654424557ebb6454077de8cf535e076a578eeabaa070dfb2d0d"}
Feb 24 10:43:28 crc kubenswrapper[4755]: I0224 10:43:28.278029 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39282: no serving certificate available for the kubelet"
Feb 24 10:43:29 crc kubenswrapper[4755]: I0224 10:43:29.102203 4755 kubelet.go:2453]
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r97pf" event={"ID":"72010696-96cb-43d6-a4e8-5613e282ccfb","Type":"ContainerStarted","Data":"6830d3d14394166868c2bba6e490af4d48f01ccfd3d30b31911d62b39eb87d95"} Feb 24 10:43:29 crc kubenswrapper[4755]: I0224 10:43:29.129824 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r97pf" podStartSLOduration=2.721258651 podStartE2EDuration="5.129799063s" podCreationTimestamp="2026-02-24 10:43:24 +0000 UTC" firstStartedPulling="2026-02-24 10:43:26.065001473 +0000 UTC m=+2910.521524046" lastFinishedPulling="2026-02-24 10:43:28.473541905 +0000 UTC m=+2912.930064458" observedRunningTime="2026-02-24 10:43:29.119291169 +0000 UTC m=+2913.575813752" watchObservedRunningTime="2026-02-24 10:43:29.129799063 +0000 UTC m=+2913.586321626" Feb 24 10:43:30 crc kubenswrapper[4755]: I0224 10:43:30.410707 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39290: no serving certificate available for the kubelet" Feb 24 10:43:31 crc kubenswrapper[4755]: I0224 10:43:31.337018 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39302: no serving certificate available for the kubelet" Feb 24 10:43:33 crc kubenswrapper[4755]: I0224 10:43:33.471552 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39316: no serving certificate available for the kubelet" Feb 24 10:43:34 crc kubenswrapper[4755]: I0224 10:43:34.391717 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36170: no serving certificate available for the kubelet" Feb 24 10:43:34 crc kubenswrapper[4755]: I0224 10:43:34.413325 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r97pf" Feb 24 10:43:34 crc kubenswrapper[4755]: I0224 10:43:34.413386 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r97pf" Feb 24 10:43:35 crc kubenswrapper[4755]: 
I0224 10:43:35.455428 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r97pf" podUID="72010696-96cb-43d6-a4e8-5613e282ccfb" containerName="registry-server" probeResult="failure" output=< Feb 24 10:43:35 crc kubenswrapper[4755]: timeout: failed to connect service ":50051" within 1s Feb 24 10:43:35 crc kubenswrapper[4755]: > Feb 24 10:43:35 crc kubenswrapper[4755]: I0224 10:43:35.949499 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36172: no serving certificate available for the kubelet" Feb 24 10:43:36 crc kubenswrapper[4755]: I0224 10:43:36.520698 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36180: no serving certificate available for the kubelet" Feb 24 10:43:37 crc kubenswrapper[4755]: I0224 10:43:37.433105 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36186: no serving certificate available for the kubelet" Feb 24 10:43:39 crc kubenswrapper[4755]: I0224 10:43:39.574598 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36192: no serving certificate available for the kubelet" Feb 24 10:43:40 crc kubenswrapper[4755]: I0224 10:43:40.479541 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36202: no serving certificate available for the kubelet" Feb 24 10:43:42 crc kubenswrapper[4755]: I0224 10:43:42.619227 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36218: no serving certificate available for the kubelet" Feb 24 10:43:43 crc kubenswrapper[4755]: I0224 10:43:43.538205 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36224: no serving certificate available for the kubelet" Feb 24 10:43:44 crc kubenswrapper[4755]: I0224 10:43:44.476601 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r97pf" Feb 24 10:43:44 crc kubenswrapper[4755]: I0224 10:43:44.556575 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-r97pf" Feb 24 10:43:44 crc kubenswrapper[4755]: I0224 10:43:44.719328 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r97pf"] Feb 24 10:43:45 crc kubenswrapper[4755]: I0224 10:43:45.721224 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38488: no serving certificate available for the kubelet" Feb 24 10:43:46 crc kubenswrapper[4755]: I0224 10:43:46.279284 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r97pf" podUID="72010696-96cb-43d6-a4e8-5613e282ccfb" containerName="registry-server" containerID="cri-o://6830d3d14394166868c2bba6e490af4d48f01ccfd3d30b31911d62b39eb87d95" gracePeriod=2 Feb 24 10:43:46 crc kubenswrapper[4755]: I0224 10:43:46.584637 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38504: no serving certificate available for the kubelet" Feb 24 10:43:46 crc kubenswrapper[4755]: I0224 10:43:46.713893 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r97pf" Feb 24 10:43:46 crc kubenswrapper[4755]: I0224 10:43:46.853291 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72010696-96cb-43d6-a4e8-5613e282ccfb-utilities\") pod \"72010696-96cb-43d6-a4e8-5613e282ccfb\" (UID: \"72010696-96cb-43d6-a4e8-5613e282ccfb\") " Feb 24 10:43:46 crc kubenswrapper[4755]: I0224 10:43:46.853392 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9fgj\" (UniqueName: \"kubernetes.io/projected/72010696-96cb-43d6-a4e8-5613e282ccfb-kube-api-access-q9fgj\") pod \"72010696-96cb-43d6-a4e8-5613e282ccfb\" (UID: \"72010696-96cb-43d6-a4e8-5613e282ccfb\") " Feb 24 10:43:46 crc kubenswrapper[4755]: I0224 10:43:46.853479 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72010696-96cb-43d6-a4e8-5613e282ccfb-catalog-content\") pod \"72010696-96cb-43d6-a4e8-5613e282ccfb\" (UID: \"72010696-96cb-43d6-a4e8-5613e282ccfb\") " Feb 24 10:43:46 crc kubenswrapper[4755]: I0224 10:43:46.854210 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72010696-96cb-43d6-a4e8-5613e282ccfb-utilities" (OuterVolumeSpecName: "utilities") pod "72010696-96cb-43d6-a4e8-5613e282ccfb" (UID: "72010696-96cb-43d6-a4e8-5613e282ccfb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:43:46 crc kubenswrapper[4755]: I0224 10:43:46.856029 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72010696-96cb-43d6-a4e8-5613e282ccfb-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 10:43:46 crc kubenswrapper[4755]: I0224 10:43:46.863324 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72010696-96cb-43d6-a4e8-5613e282ccfb-kube-api-access-q9fgj" (OuterVolumeSpecName: "kube-api-access-q9fgj") pod "72010696-96cb-43d6-a4e8-5613e282ccfb" (UID: "72010696-96cb-43d6-a4e8-5613e282ccfb"). InnerVolumeSpecName "kube-api-access-q9fgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:43:46 crc kubenswrapper[4755]: I0224 10:43:46.957879 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9fgj\" (UniqueName: \"kubernetes.io/projected/72010696-96cb-43d6-a4e8-5613e282ccfb-kube-api-access-q9fgj\") on node \"crc\" DevicePath \"\"" Feb 24 10:43:46 crc kubenswrapper[4755]: I0224 10:43:46.993187 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72010696-96cb-43d6-a4e8-5613e282ccfb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72010696-96cb-43d6-a4e8-5613e282ccfb" (UID: "72010696-96cb-43d6-a4e8-5613e282ccfb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:43:47 crc kubenswrapper[4755]: I0224 10:43:47.059939 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72010696-96cb-43d6-a4e8-5613e282ccfb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 10:43:47 crc kubenswrapper[4755]: I0224 10:43:47.295240 4755 generic.go:334] "Generic (PLEG): container finished" podID="72010696-96cb-43d6-a4e8-5613e282ccfb" containerID="6830d3d14394166868c2bba6e490af4d48f01ccfd3d30b31911d62b39eb87d95" exitCode=0 Feb 24 10:43:47 crc kubenswrapper[4755]: I0224 10:43:47.295294 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r97pf" Feb 24 10:43:47 crc kubenswrapper[4755]: I0224 10:43:47.295294 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r97pf" event={"ID":"72010696-96cb-43d6-a4e8-5613e282ccfb","Type":"ContainerDied","Data":"6830d3d14394166868c2bba6e490af4d48f01ccfd3d30b31911d62b39eb87d95"} Feb 24 10:43:47 crc kubenswrapper[4755]: I0224 10:43:47.295481 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r97pf" event={"ID":"72010696-96cb-43d6-a4e8-5613e282ccfb","Type":"ContainerDied","Data":"9d6bb63219c0f4e119d0b60064f676f98bb070005d868264f05253a2d2401a86"} Feb 24 10:43:47 crc kubenswrapper[4755]: I0224 10:43:47.295532 4755 scope.go:117] "RemoveContainer" containerID="6830d3d14394166868c2bba6e490af4d48f01ccfd3d30b31911d62b39eb87d95" Feb 24 10:43:47 crc kubenswrapper[4755]: I0224 10:43:47.334298 4755 scope.go:117] "RemoveContainer" containerID="ff6917368f6ef654424557ebb6454077de8cf535e076a578eeabaa070dfb2d0d" Feb 24 10:43:47 crc kubenswrapper[4755]: I0224 10:43:47.355946 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r97pf"] Feb 24 10:43:47 crc kubenswrapper[4755]: I0224 
10:43:47.380139 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r97pf"] Feb 24 10:43:47 crc kubenswrapper[4755]: I0224 10:43:47.380425 4755 scope.go:117] "RemoveContainer" containerID="2a4d0761eaad7e1b71c7960d63a1cd5f633b5de909e46ba3973c54e01a021e47" Feb 24 10:43:47 crc kubenswrapper[4755]: I0224 10:43:47.468716 4755 scope.go:117] "RemoveContainer" containerID="6830d3d14394166868c2bba6e490af4d48f01ccfd3d30b31911d62b39eb87d95" Feb 24 10:43:47 crc kubenswrapper[4755]: E0224 10:43:47.470055 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6830d3d14394166868c2bba6e490af4d48f01ccfd3d30b31911d62b39eb87d95\": container with ID starting with 6830d3d14394166868c2bba6e490af4d48f01ccfd3d30b31911d62b39eb87d95 not found: ID does not exist" containerID="6830d3d14394166868c2bba6e490af4d48f01ccfd3d30b31911d62b39eb87d95" Feb 24 10:43:47 crc kubenswrapper[4755]: I0224 10:43:47.470115 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6830d3d14394166868c2bba6e490af4d48f01ccfd3d30b31911d62b39eb87d95"} err="failed to get container status \"6830d3d14394166868c2bba6e490af4d48f01ccfd3d30b31911d62b39eb87d95\": rpc error: code = NotFound desc = could not find container \"6830d3d14394166868c2bba6e490af4d48f01ccfd3d30b31911d62b39eb87d95\": container with ID starting with 6830d3d14394166868c2bba6e490af4d48f01ccfd3d30b31911d62b39eb87d95 not found: ID does not exist" Feb 24 10:43:47 crc kubenswrapper[4755]: I0224 10:43:47.470139 4755 scope.go:117] "RemoveContainer" containerID="ff6917368f6ef654424557ebb6454077de8cf535e076a578eeabaa070dfb2d0d" Feb 24 10:43:47 crc kubenswrapper[4755]: E0224 10:43:47.472560 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff6917368f6ef654424557ebb6454077de8cf535e076a578eeabaa070dfb2d0d\": container with ID 
starting with ff6917368f6ef654424557ebb6454077de8cf535e076a578eeabaa070dfb2d0d not found: ID does not exist" containerID="ff6917368f6ef654424557ebb6454077de8cf535e076a578eeabaa070dfb2d0d" Feb 24 10:43:47 crc kubenswrapper[4755]: I0224 10:43:47.472591 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff6917368f6ef654424557ebb6454077de8cf535e076a578eeabaa070dfb2d0d"} err="failed to get container status \"ff6917368f6ef654424557ebb6454077de8cf535e076a578eeabaa070dfb2d0d\": rpc error: code = NotFound desc = could not find container \"ff6917368f6ef654424557ebb6454077de8cf535e076a578eeabaa070dfb2d0d\": container with ID starting with ff6917368f6ef654424557ebb6454077de8cf535e076a578eeabaa070dfb2d0d not found: ID does not exist" Feb 24 10:43:47 crc kubenswrapper[4755]: I0224 10:43:47.472612 4755 scope.go:117] "RemoveContainer" containerID="2a4d0761eaad7e1b71c7960d63a1cd5f633b5de909e46ba3973c54e01a021e47" Feb 24 10:43:47 crc kubenswrapper[4755]: E0224 10:43:47.473379 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a4d0761eaad7e1b71c7960d63a1cd5f633b5de909e46ba3973c54e01a021e47\": container with ID starting with 2a4d0761eaad7e1b71c7960d63a1cd5f633b5de909e46ba3973c54e01a021e47 not found: ID does not exist" containerID="2a4d0761eaad7e1b71c7960d63a1cd5f633b5de909e46ba3973c54e01a021e47" Feb 24 10:43:47 crc kubenswrapper[4755]: I0224 10:43:47.473401 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a4d0761eaad7e1b71c7960d63a1cd5f633b5de909e46ba3973c54e01a021e47"} err="failed to get container status \"2a4d0761eaad7e1b71c7960d63a1cd5f633b5de909e46ba3973c54e01a021e47\": rpc error: code = NotFound desc = could not find container \"2a4d0761eaad7e1b71c7960d63a1cd5f633b5de909e46ba3973c54e01a021e47\": container with ID starting with 2a4d0761eaad7e1b71c7960d63a1cd5f633b5de909e46ba3973c54e01a021e47 not found: 
ID does not exist" Feb 24 10:43:48 crc kubenswrapper[4755]: I0224 10:43:48.333665 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72010696-96cb-43d6-a4e8-5613e282ccfb" path="/var/lib/kubelet/pods/72010696-96cb-43d6-a4e8-5613e282ccfb/volumes" Feb 24 10:43:48 crc kubenswrapper[4755]: I0224 10:43:48.781372 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38512: no serving certificate available for the kubelet" Feb 24 10:43:49 crc kubenswrapper[4755]: I0224 10:43:49.640755 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38528: no serving certificate available for the kubelet" Feb 24 10:43:51 crc kubenswrapper[4755]: I0224 10:43:51.841303 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38538: no serving certificate available for the kubelet" Feb 24 10:43:52 crc kubenswrapper[4755]: I0224 10:43:52.681319 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38544: no serving certificate available for the kubelet" Feb 24 10:43:54 crc kubenswrapper[4755]: I0224 10:43:54.903607 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56984: no serving certificate available for the kubelet" Feb 24 10:43:55 crc kubenswrapper[4755]: I0224 10:43:55.732331 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56994: no serving certificate available for the kubelet" Feb 24 10:43:56 crc kubenswrapper[4755]: I0224 10:43:56.460974 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57010: no serving certificate available for the kubelet" Feb 24 10:43:57 crc kubenswrapper[4755]: I0224 10:43:57.949612 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57020: no serving certificate available for the kubelet" Feb 24 10:43:58 crc kubenswrapper[4755]: I0224 10:43:58.783512 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57026: no serving certificate available for the kubelet" Feb 24 10:44:00 crc kubenswrapper[4755]: I0224 10:44:00.997011 4755 ???:1] "http: TLS handshake error from 
192.168.126.11:57042: no serving certificate available for the kubelet" Feb 24 10:44:01 crc kubenswrapper[4755]: I0224 10:44:01.820735 4755 ???:1] "http: TLS handshake error from 192.168.126.11:57052: no serving certificate available for the kubelet" Feb 24 10:44:04 crc kubenswrapper[4755]: I0224 10:44:04.034382 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46754: no serving certificate available for the kubelet" Feb 24 10:44:04 crc kubenswrapper[4755]: I0224 10:44:04.875814 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46758: no serving certificate available for the kubelet" Feb 24 10:44:07 crc kubenswrapper[4755]: I0224 10:44:07.079686 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46774: no serving certificate available for the kubelet" Feb 24 10:44:07 crc kubenswrapper[4755]: I0224 10:44:07.916357 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46788: no serving certificate available for the kubelet" Feb 24 10:44:10 crc kubenswrapper[4755]: I0224 10:44:10.136316 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46794: no serving certificate available for the kubelet" Feb 24 10:44:10 crc kubenswrapper[4755]: I0224 10:44:10.972117 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46796: no serving certificate available for the kubelet" Feb 24 10:44:13 crc kubenswrapper[4755]: I0224 10:44:13.180458 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46800: no serving certificate available for the kubelet" Feb 24 10:44:14 crc kubenswrapper[4755]: I0224 10:44:14.030440 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55230: no serving certificate available for the kubelet" Feb 24 10:44:16 crc kubenswrapper[4755]: I0224 10:44:16.242924 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55242: no serving certificate available for the kubelet" Feb 24 10:44:17 crc kubenswrapper[4755]: I0224 10:44:17.093831 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55254: no 
serving certificate available for the kubelet" Feb 24 10:44:19 crc kubenswrapper[4755]: I0224 10:44:19.308225 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55260: no serving certificate available for the kubelet" Feb 24 10:44:20 crc kubenswrapper[4755]: I0224 10:44:20.137547 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55264: no serving certificate available for the kubelet" Feb 24 10:44:22 crc kubenswrapper[4755]: I0224 10:44:22.355212 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55278: no serving certificate available for the kubelet" Feb 24 10:44:23 crc kubenswrapper[4755]: I0224 10:44:23.183201 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55288: no serving certificate available for the kubelet" Feb 24 10:44:25 crc kubenswrapper[4755]: I0224 10:44:25.398360 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34270: no serving certificate available for the kubelet" Feb 24 10:44:26 crc kubenswrapper[4755]: I0224 10:44:26.252389 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34284: no serving certificate available for the kubelet" Feb 24 10:44:28 crc kubenswrapper[4755]: I0224 10:44:28.452985 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34294: no serving certificate available for the kubelet" Feb 24 10:44:29 crc kubenswrapper[4755]: I0224 10:44:29.298616 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34306: no serving certificate available for the kubelet" Feb 24 10:44:31 crc kubenswrapper[4755]: I0224 10:44:31.497208 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34318: no serving certificate available for the kubelet" Feb 24 10:44:32 crc kubenswrapper[4755]: I0224 10:44:32.361447 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34332: no serving certificate available for the kubelet" Feb 24 10:44:34 crc kubenswrapper[4755]: I0224 10:44:34.538317 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38202: no serving certificate available 
for the kubelet" Feb 24 10:44:35 crc kubenswrapper[4755]: I0224 10:44:35.419929 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38206: no serving certificate available for the kubelet" Feb 24 10:44:37 crc kubenswrapper[4755]: I0224 10:44:37.513183 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38210: no serving certificate available for the kubelet" Feb 24 10:44:37 crc kubenswrapper[4755]: I0224 10:44:37.572250 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38220: no serving certificate available for the kubelet" Feb 24 10:44:38 crc kubenswrapper[4755]: I0224 10:44:38.471701 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38228: no serving certificate available for the kubelet" Feb 24 10:44:40 crc kubenswrapper[4755]: I0224 10:44:40.622903 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38230: no serving certificate available for the kubelet" Feb 24 10:44:41 crc kubenswrapper[4755]: I0224 10:44:41.506695 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38240: no serving certificate available for the kubelet" Feb 24 10:44:43 crc kubenswrapper[4755]: I0224 10:44:43.680178 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37338: no serving certificate available for the kubelet" Feb 24 10:44:44 crc kubenswrapper[4755]: I0224 10:44:44.542176 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37354: no serving certificate available for the kubelet" Feb 24 10:44:46 crc kubenswrapper[4755]: I0224 10:44:46.722087 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37370: no serving certificate available for the kubelet" Feb 24 10:44:47 crc kubenswrapper[4755]: I0224 10:44:47.585244 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37384: no serving certificate available for the kubelet" Feb 24 10:44:49 crc kubenswrapper[4755]: I0224 10:44:49.779007 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37396: no serving certificate available for the kubelet" Feb 24 
10:44:50 crc kubenswrapper[4755]: I0224 10:44:50.620930 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37402: no serving certificate available for the kubelet" Feb 24 10:44:52 crc kubenswrapper[4755]: I0224 10:44:52.838240 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37410: no serving certificate available for the kubelet" Feb 24 10:44:53 crc kubenswrapper[4755]: I0224 10:44:53.658696 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37424: no serving certificate available for the kubelet" Feb 24 10:44:55 crc kubenswrapper[4755]: I0224 10:44:55.907020 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44568: no serving certificate available for the kubelet" Feb 24 10:44:56 crc kubenswrapper[4755]: I0224 10:44:56.707486 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44580: no serving certificate available for the kubelet" Feb 24 10:44:58 crc kubenswrapper[4755]: I0224 10:44:58.940819 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44592: no serving certificate available for the kubelet" Feb 24 10:44:59 crc kubenswrapper[4755]: I0224 10:44:59.755413 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44596: no serving certificate available for the kubelet" Feb 24 10:45:00 crc kubenswrapper[4755]: I0224 10:45:00.169495 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532165-6fv5t"] Feb 24 10:45:00 crc kubenswrapper[4755]: E0224 10:45:00.169911 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72010696-96cb-43d6-a4e8-5613e282ccfb" containerName="extract-content" Feb 24 10:45:00 crc kubenswrapper[4755]: I0224 10:45:00.169925 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="72010696-96cb-43d6-a4e8-5613e282ccfb" containerName="extract-content" Feb 24 10:45:00 crc kubenswrapper[4755]: E0224 10:45:00.169979 4755 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="72010696-96cb-43d6-a4e8-5613e282ccfb" containerName="extract-utilities" Feb 24 10:45:00 crc kubenswrapper[4755]: I0224 10:45:00.169988 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="72010696-96cb-43d6-a4e8-5613e282ccfb" containerName="extract-utilities" Feb 24 10:45:00 crc kubenswrapper[4755]: E0224 10:45:00.170001 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72010696-96cb-43d6-a4e8-5613e282ccfb" containerName="registry-server" Feb 24 10:45:00 crc kubenswrapper[4755]: I0224 10:45:00.170010 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="72010696-96cb-43d6-a4e8-5613e282ccfb" containerName="registry-server" Feb 24 10:45:00 crc kubenswrapper[4755]: I0224 10:45:00.170242 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="72010696-96cb-43d6-a4e8-5613e282ccfb" containerName="registry-server" Feb 24 10:45:00 crc kubenswrapper[4755]: I0224 10:45:00.170877 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532165-6fv5t" Feb 24 10:45:00 crc kubenswrapper[4755]: I0224 10:45:00.172847 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 24 10:45:00 crc kubenswrapper[4755]: I0224 10:45:00.173251 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 10:45:00 crc kubenswrapper[4755]: I0224 10:45:00.184486 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532165-6fv5t"] Feb 24 10:45:00 crc kubenswrapper[4755]: I0224 10:45:00.284455 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1134224d-ec01-4227-83f4-dc8fdbad2375-config-volume\") pod \"collect-profiles-29532165-6fv5t\" 
(UID: \"1134224d-ec01-4227-83f4-dc8fdbad2375\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532165-6fv5t" Feb 24 10:45:00 crc kubenswrapper[4755]: I0224 10:45:00.284509 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c55p6\" (UniqueName: \"kubernetes.io/projected/1134224d-ec01-4227-83f4-dc8fdbad2375-kube-api-access-c55p6\") pod \"collect-profiles-29532165-6fv5t\" (UID: \"1134224d-ec01-4227-83f4-dc8fdbad2375\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532165-6fv5t" Feb 24 10:45:00 crc kubenswrapper[4755]: I0224 10:45:00.284549 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1134224d-ec01-4227-83f4-dc8fdbad2375-secret-volume\") pod \"collect-profiles-29532165-6fv5t\" (UID: \"1134224d-ec01-4227-83f4-dc8fdbad2375\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532165-6fv5t" Feb 24 10:45:00 crc kubenswrapper[4755]: I0224 10:45:00.385640 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1134224d-ec01-4227-83f4-dc8fdbad2375-config-volume\") pod \"collect-profiles-29532165-6fv5t\" (UID: \"1134224d-ec01-4227-83f4-dc8fdbad2375\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532165-6fv5t" Feb 24 10:45:00 crc kubenswrapper[4755]: I0224 10:45:00.385707 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c55p6\" (UniqueName: \"kubernetes.io/projected/1134224d-ec01-4227-83f4-dc8fdbad2375-kube-api-access-c55p6\") pod \"collect-profiles-29532165-6fv5t\" (UID: \"1134224d-ec01-4227-83f4-dc8fdbad2375\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532165-6fv5t" Feb 24 10:45:00 crc kubenswrapper[4755]: I0224 10:45:00.385756 4755 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1134224d-ec01-4227-83f4-dc8fdbad2375-secret-volume\") pod \"collect-profiles-29532165-6fv5t\" (UID: \"1134224d-ec01-4227-83f4-dc8fdbad2375\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532165-6fv5t" Feb 24 10:45:00 crc kubenswrapper[4755]: I0224 10:45:00.387774 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1134224d-ec01-4227-83f4-dc8fdbad2375-config-volume\") pod \"collect-profiles-29532165-6fv5t\" (UID: \"1134224d-ec01-4227-83f4-dc8fdbad2375\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532165-6fv5t" Feb 24 10:45:00 crc kubenswrapper[4755]: I0224 10:45:00.398001 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1134224d-ec01-4227-83f4-dc8fdbad2375-secret-volume\") pod \"collect-profiles-29532165-6fv5t\" (UID: \"1134224d-ec01-4227-83f4-dc8fdbad2375\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532165-6fv5t" Feb 24 10:45:00 crc kubenswrapper[4755]: I0224 10:45:00.402161 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c55p6\" (UniqueName: \"kubernetes.io/projected/1134224d-ec01-4227-83f4-dc8fdbad2375-kube-api-access-c55p6\") pod \"collect-profiles-29532165-6fv5t\" (UID: \"1134224d-ec01-4227-83f4-dc8fdbad2375\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532165-6fv5t" Feb 24 10:45:00 crc kubenswrapper[4755]: I0224 10:45:00.489887 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532165-6fv5t" Feb 24 10:45:00 crc kubenswrapper[4755]: I0224 10:45:00.915914 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532165-6fv5t"] Feb 24 10:45:00 crc kubenswrapper[4755]: W0224 10:45:00.924695 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1134224d_ec01_4227_83f4_dc8fdbad2375.slice/crio-348288896e5f93875289a3bbf7f794e80a6094c7cf1e76cc676d91697b7dc072 WatchSource:0}: Error finding container 348288896e5f93875289a3bbf7f794e80a6094c7cf1e76cc676d91697b7dc072: Status 404 returned error can't find the container with id 348288896e5f93875289a3bbf7f794e80a6094c7cf1e76cc676d91697b7dc072 Feb 24 10:45:01 crc kubenswrapper[4755]: I0224 10:45:01.038749 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532165-6fv5t" event={"ID":"1134224d-ec01-4227-83f4-dc8fdbad2375","Type":"ContainerStarted","Data":"348288896e5f93875289a3bbf7f794e80a6094c7cf1e76cc676d91697b7dc072"} Feb 24 10:45:01 crc kubenswrapper[4755]: I0224 10:45:01.996625 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44598: no serving certificate available for the kubelet" Feb 24 10:45:02 crc kubenswrapper[4755]: I0224 10:45:02.046493 4755 generic.go:334] "Generic (PLEG): container finished" podID="1134224d-ec01-4227-83f4-dc8fdbad2375" containerID="8112274814cd70e513d387cfd178260a2fe71dfec4b850d74b9a1f7a70aeafaf" exitCode=0 Feb 24 10:45:02 crc kubenswrapper[4755]: I0224 10:45:02.046537 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532165-6fv5t" event={"ID":"1134224d-ec01-4227-83f4-dc8fdbad2375","Type":"ContainerDied","Data":"8112274814cd70e513d387cfd178260a2fe71dfec4b850d74b9a1f7a70aeafaf"} Feb 24 10:45:02 crc kubenswrapper[4755]: 
I0224 10:45:02.810664 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44614: no serving certificate available for the kubelet" Feb 24 10:45:03 crc kubenswrapper[4755]: I0224 10:45:03.374223 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532165-6fv5t" Feb 24 10:45:03 crc kubenswrapper[4755]: I0224 10:45:03.543017 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1134224d-ec01-4227-83f4-dc8fdbad2375-config-volume\") pod \"1134224d-ec01-4227-83f4-dc8fdbad2375\" (UID: \"1134224d-ec01-4227-83f4-dc8fdbad2375\") " Feb 24 10:45:03 crc kubenswrapper[4755]: I0224 10:45:03.543303 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1134224d-ec01-4227-83f4-dc8fdbad2375-secret-volume\") pod \"1134224d-ec01-4227-83f4-dc8fdbad2375\" (UID: \"1134224d-ec01-4227-83f4-dc8fdbad2375\") " Feb 24 10:45:03 crc kubenswrapper[4755]: I0224 10:45:03.543908 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1134224d-ec01-4227-83f4-dc8fdbad2375-config-volume" (OuterVolumeSpecName: "config-volume") pod "1134224d-ec01-4227-83f4-dc8fdbad2375" (UID: "1134224d-ec01-4227-83f4-dc8fdbad2375"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 10:45:03 crc kubenswrapper[4755]: I0224 10:45:03.544482 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c55p6\" (UniqueName: \"kubernetes.io/projected/1134224d-ec01-4227-83f4-dc8fdbad2375-kube-api-access-c55p6\") pod \"1134224d-ec01-4227-83f4-dc8fdbad2375\" (UID: \"1134224d-ec01-4227-83f4-dc8fdbad2375\") " Feb 24 10:45:03 crc kubenswrapper[4755]: I0224 10:45:03.545132 4755 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1134224d-ec01-4227-83f4-dc8fdbad2375-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 10:45:03 crc kubenswrapper[4755]: I0224 10:45:03.550972 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1134224d-ec01-4227-83f4-dc8fdbad2375-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1134224d-ec01-4227-83f4-dc8fdbad2375" (UID: "1134224d-ec01-4227-83f4-dc8fdbad2375"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 10:45:03 crc kubenswrapper[4755]: I0224 10:45:03.552310 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1134224d-ec01-4227-83f4-dc8fdbad2375-kube-api-access-c55p6" (OuterVolumeSpecName: "kube-api-access-c55p6") pod "1134224d-ec01-4227-83f4-dc8fdbad2375" (UID: "1134224d-ec01-4227-83f4-dc8fdbad2375"). InnerVolumeSpecName "kube-api-access-c55p6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:45:03 crc kubenswrapper[4755]: I0224 10:45:03.646470 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c55p6\" (UniqueName: \"kubernetes.io/projected/1134224d-ec01-4227-83f4-dc8fdbad2375-kube-api-access-c55p6\") on node \"crc\" DevicePath \"\"" Feb 24 10:45:03 crc kubenswrapper[4755]: I0224 10:45:03.646523 4755 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1134224d-ec01-4227-83f4-dc8fdbad2375-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 10:45:04 crc kubenswrapper[4755]: I0224 10:45:04.068735 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532165-6fv5t" event={"ID":"1134224d-ec01-4227-83f4-dc8fdbad2375","Type":"ContainerDied","Data":"348288896e5f93875289a3bbf7f794e80a6094c7cf1e76cc676d91697b7dc072"} Feb 24 10:45:04 crc kubenswrapper[4755]: I0224 10:45:04.069121 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="348288896e5f93875289a3bbf7f794e80a6094c7cf1e76cc676d91697b7dc072" Feb 24 10:45:04 crc kubenswrapper[4755]: I0224 10:45:04.068784 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532165-6fv5t" Feb 24 10:45:04 crc kubenswrapper[4755]: I0224 10:45:04.455882 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532120-98jrb"] Feb 24 10:45:04 crc kubenswrapper[4755]: I0224 10:45:04.461528 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532120-98jrb"] Feb 24 10:45:05 crc kubenswrapper[4755]: I0224 10:45:05.048788 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37798: no serving certificate available for the kubelet" Feb 24 10:45:05 crc kubenswrapper[4755]: I0224 10:45:05.854920 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37802: no serving certificate available for the kubelet" Feb 24 10:45:06 crc kubenswrapper[4755]: I0224 10:45:06.334417 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b86f7829-7ff9-4702-89d8-081b0997310e" path="/var/lib/kubelet/pods/b86f7829-7ff9-4702-89d8-081b0997310e/volumes" Feb 24 10:45:08 crc kubenswrapper[4755]: I0224 10:45:08.094953 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37818: no serving certificate available for the kubelet" Feb 24 10:45:08 crc kubenswrapper[4755]: I0224 10:45:08.897738 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37826: no serving certificate available for the kubelet" Feb 24 10:45:11 crc kubenswrapper[4755]: I0224 10:45:11.141717 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37840: no serving certificate available for the kubelet" Feb 24 10:45:11 crc kubenswrapper[4755]: I0224 10:45:11.938543 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37852: no serving certificate available for the kubelet" Feb 24 10:45:14 crc kubenswrapper[4755]: I0224 10:45:14.198386 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52290: no serving certificate available for the kubelet" Feb 24 
10:45:15 crc kubenswrapper[4755]: I0224 10:45:15.001014 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52300: no serving certificate available for the kubelet" Feb 24 10:45:17 crc kubenswrapper[4755]: I0224 10:45:17.251502 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52312: no serving certificate available for the kubelet" Feb 24 10:45:18 crc kubenswrapper[4755]: I0224 10:45:18.042680 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52314: no serving certificate available for the kubelet" Feb 24 10:45:20 crc kubenswrapper[4755]: I0224 10:45:20.298140 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52328: no serving certificate available for the kubelet" Feb 24 10:45:21 crc kubenswrapper[4755]: I0224 10:45:21.089435 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52344: no serving certificate available for the kubelet" Feb 24 10:45:23 crc kubenswrapper[4755]: I0224 10:45:23.356256 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52348: no serving certificate available for the kubelet" Feb 24 10:45:24 crc kubenswrapper[4755]: I0224 10:45:24.141365 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45210: no serving certificate available for the kubelet" Feb 24 10:45:26 crc kubenswrapper[4755]: I0224 10:45:26.394101 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45220: no serving certificate available for the kubelet" Feb 24 10:45:27 crc kubenswrapper[4755]: I0224 10:45:27.221993 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45224: no serving certificate available for the kubelet" Feb 24 10:45:29 crc kubenswrapper[4755]: I0224 10:45:29.438523 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45236: no serving certificate available for the kubelet" Feb 24 10:45:30 crc kubenswrapper[4755]: I0224 10:45:30.264734 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45244: no serving certificate available for the kubelet" Feb 24 10:45:32 crc 
kubenswrapper[4755]: I0224 10:45:32.515994 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45260: no serving certificate available for the kubelet" Feb 24 10:45:33 crc kubenswrapper[4755]: I0224 10:45:33.318263 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45268: no serving certificate available for the kubelet" Feb 24 10:45:35 crc kubenswrapper[4755]: I0224 10:45:35.563666 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55736: no serving certificate available for the kubelet" Feb 24 10:45:36 crc kubenswrapper[4755]: I0224 10:45:36.373659 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55750: no serving certificate available for the kubelet" Feb 24 10:45:38 crc kubenswrapper[4755]: I0224 10:45:38.617383 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55762: no serving certificate available for the kubelet" Feb 24 10:45:39 crc kubenswrapper[4755]: I0224 10:45:39.496675 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55778: no serving certificate available for the kubelet" Feb 24 10:45:41 crc kubenswrapper[4755]: I0224 10:45:41.680041 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55792: no serving certificate available for the kubelet" Feb 24 10:45:42 crc kubenswrapper[4755]: I0224 10:45:42.552869 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55802: no serving certificate available for the kubelet" Feb 24 10:45:44 crc kubenswrapper[4755]: I0224 10:45:44.736029 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43390: no serving certificate available for the kubelet" Feb 24 10:45:45 crc kubenswrapper[4755]: I0224 10:45:45.590203 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43392: no serving certificate available for the kubelet" Feb 24 10:45:47 crc kubenswrapper[4755]: I0224 10:45:47.786902 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43404: no serving certificate available for the kubelet" Feb 24 10:45:48 crc kubenswrapper[4755]: I0224 
10:45:48.627103 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43416: no serving certificate available for the kubelet" Feb 24 10:45:50 crc kubenswrapper[4755]: I0224 10:45:50.834308 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43424: no serving certificate available for the kubelet" Feb 24 10:45:51 crc kubenswrapper[4755]: I0224 10:45:51.674796 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43438: no serving certificate available for the kubelet" Feb 24 10:45:51 crc kubenswrapper[4755]: I0224 10:45:51.694787 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:45:51 crc kubenswrapper[4755]: I0224 10:45:51.694855 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 10:45:53 crc kubenswrapper[4755]: I0224 10:45:53.881413 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47576: no serving certificate available for the kubelet" Feb 24 10:45:54 crc kubenswrapper[4755]: I0224 10:45:54.796519 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47590: no serving certificate available for the kubelet" Feb 24 10:45:56 crc kubenswrapper[4755]: I0224 10:45:56.932973 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47592: no serving certificate available for the kubelet" Feb 24 10:45:57 crc kubenswrapper[4755]: I0224 10:45:57.859516 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47602: no serving certificate available for the kubelet" Feb 24 10:45:59 crc kubenswrapper[4755]: I0224 
10:45:59.429934 4755 scope.go:117] "RemoveContainer" containerID="4a6abd801e87691fbf2ee97f9294fc03074ab69b5b530a925669bf5c4f9d9c1f" Feb 24 10:45:59 crc kubenswrapper[4755]: I0224 10:45:59.484021 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47604: no serving certificate available for the kubelet" Feb 24 10:45:59 crc kubenswrapper[4755]: I0224 10:45:59.991907 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47610: no serving certificate available for the kubelet" Feb 24 10:46:00 crc kubenswrapper[4755]: I0224 10:46:00.917969 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47612: no serving certificate available for the kubelet" Feb 24 10:46:03 crc kubenswrapper[4755]: I0224 10:46:03.040908 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47616: no serving certificate available for the kubelet" Feb 24 10:46:03 crc kubenswrapper[4755]: I0224 10:46:03.973273 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48940: no serving certificate available for the kubelet" Feb 24 10:46:06 crc kubenswrapper[4755]: I0224 10:46:06.081748 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48948: no serving certificate available for the kubelet" Feb 24 10:46:06 crc kubenswrapper[4755]: I0224 10:46:06.510819 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6zw75"] Feb 24 10:46:06 crc kubenswrapper[4755]: E0224 10:46:06.511212 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1134224d-ec01-4227-83f4-dc8fdbad2375" containerName="collect-profiles" Feb 24 10:46:06 crc kubenswrapper[4755]: I0224 10:46:06.511229 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="1134224d-ec01-4227-83f4-dc8fdbad2375" containerName="collect-profiles" Feb 24 10:46:06 crc kubenswrapper[4755]: I0224 10:46:06.511392 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="1134224d-ec01-4227-83f4-dc8fdbad2375" containerName="collect-profiles" Feb 24 10:46:06 crc 
kubenswrapper[4755]: I0224 10:46:06.512611 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6zw75" Feb 24 10:46:06 crc kubenswrapper[4755]: I0224 10:46:06.537159 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6zw75"] Feb 24 10:46:06 crc kubenswrapper[4755]: I0224 10:46:06.580548 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/152c9ef0-50a1-4475-b384-997c53e68a21-utilities\") pod \"community-operators-6zw75\" (UID: \"152c9ef0-50a1-4475-b384-997c53e68a21\") " pod="openshift-marketplace/community-operators-6zw75" Feb 24 10:46:06 crc kubenswrapper[4755]: I0224 10:46:06.580703 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7d86\" (UniqueName: \"kubernetes.io/projected/152c9ef0-50a1-4475-b384-997c53e68a21-kube-api-access-n7d86\") pod \"community-operators-6zw75\" (UID: \"152c9ef0-50a1-4475-b384-997c53e68a21\") " pod="openshift-marketplace/community-operators-6zw75" Feb 24 10:46:06 crc kubenswrapper[4755]: I0224 10:46:06.580777 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/152c9ef0-50a1-4475-b384-997c53e68a21-catalog-content\") pod \"community-operators-6zw75\" (UID: \"152c9ef0-50a1-4475-b384-997c53e68a21\") " pod="openshift-marketplace/community-operators-6zw75" Feb 24 10:46:06 crc kubenswrapper[4755]: I0224 10:46:06.682049 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/152c9ef0-50a1-4475-b384-997c53e68a21-catalog-content\") pod \"community-operators-6zw75\" (UID: \"152c9ef0-50a1-4475-b384-997c53e68a21\") " pod="openshift-marketplace/community-operators-6zw75" Feb 24 
10:46:06 crc kubenswrapper[4755]: I0224 10:46:06.682222 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/152c9ef0-50a1-4475-b384-997c53e68a21-utilities\") pod \"community-operators-6zw75\" (UID: \"152c9ef0-50a1-4475-b384-997c53e68a21\") " pod="openshift-marketplace/community-operators-6zw75" Feb 24 10:46:06 crc kubenswrapper[4755]: I0224 10:46:06.682286 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7d86\" (UniqueName: \"kubernetes.io/projected/152c9ef0-50a1-4475-b384-997c53e68a21-kube-api-access-n7d86\") pod \"community-operators-6zw75\" (UID: \"152c9ef0-50a1-4475-b384-997c53e68a21\") " pod="openshift-marketplace/community-operators-6zw75" Feb 24 10:46:06 crc kubenswrapper[4755]: I0224 10:46:06.682682 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/152c9ef0-50a1-4475-b384-997c53e68a21-catalog-content\") pod \"community-operators-6zw75\" (UID: \"152c9ef0-50a1-4475-b384-997c53e68a21\") " pod="openshift-marketplace/community-operators-6zw75" Feb 24 10:46:06 crc kubenswrapper[4755]: I0224 10:46:06.682718 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/152c9ef0-50a1-4475-b384-997c53e68a21-utilities\") pod \"community-operators-6zw75\" (UID: \"152c9ef0-50a1-4475-b384-997c53e68a21\") " pod="openshift-marketplace/community-operators-6zw75" Feb 24 10:46:06 crc kubenswrapper[4755]: I0224 10:46:06.708909 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7d86\" (UniqueName: \"kubernetes.io/projected/152c9ef0-50a1-4475-b384-997c53e68a21-kube-api-access-n7d86\") pod \"community-operators-6zw75\" (UID: \"152c9ef0-50a1-4475-b384-997c53e68a21\") " pod="openshift-marketplace/community-operators-6zw75" Feb 24 10:46:06 crc kubenswrapper[4755]: 
I0224 10:46:06.836771 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6zw75" Feb 24 10:46:07 crc kubenswrapper[4755]: I0224 10:46:07.023659 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48964: no serving certificate available for the kubelet" Feb 24 10:46:07 crc kubenswrapper[4755]: I0224 10:46:07.339329 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6zw75"] Feb 24 10:46:07 crc kubenswrapper[4755]: I0224 10:46:07.735814 4755 generic.go:334] "Generic (PLEG): container finished" podID="152c9ef0-50a1-4475-b384-997c53e68a21" containerID="9987d7c81ee63c74dcc6cf05fcb7166550ccb483b64ef16e78a4573b2e77d36d" exitCode=0 Feb 24 10:46:07 crc kubenswrapper[4755]: I0224 10:46:07.735973 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zw75" event={"ID":"152c9ef0-50a1-4475-b384-997c53e68a21","Type":"ContainerDied","Data":"9987d7c81ee63c74dcc6cf05fcb7166550ccb483b64ef16e78a4573b2e77d36d"} Feb 24 10:46:07 crc kubenswrapper[4755]: I0224 10:46:07.736146 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zw75" event={"ID":"152c9ef0-50a1-4475-b384-997c53e68a21","Type":"ContainerStarted","Data":"d12de66636c8eb1f2cf43d08447e5d3acee6f61140b6d38ba3bb2e1e5ed38fba"} Feb 24 10:46:08 crc kubenswrapper[4755]: I0224 10:46:08.747145 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zw75" event={"ID":"152c9ef0-50a1-4475-b384-997c53e68a21","Type":"ContainerStarted","Data":"f3c47a4d3995724da7f5646688d98947245b1f24c04b9dc007770aab847fd121"} Feb 24 10:46:09 crc kubenswrapper[4755]: I0224 10:46:09.141678 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48980: no serving certificate available for the kubelet" Feb 24 10:46:09 crc kubenswrapper[4755]: I0224 10:46:09.761831 4755 generic.go:334] 
"Generic (PLEG): container finished" podID="152c9ef0-50a1-4475-b384-997c53e68a21" containerID="f3c47a4d3995724da7f5646688d98947245b1f24c04b9dc007770aab847fd121" exitCode=0 Feb 24 10:46:09 crc kubenswrapper[4755]: I0224 10:46:09.761929 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zw75" event={"ID":"152c9ef0-50a1-4475-b384-997c53e68a21","Type":"ContainerDied","Data":"f3c47a4d3995724da7f5646688d98947245b1f24c04b9dc007770aab847fd121"} Feb 24 10:46:10 crc kubenswrapper[4755]: I0224 10:46:10.087578 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48984: no serving certificate available for the kubelet" Feb 24 10:46:10 crc kubenswrapper[4755]: I0224 10:46:10.770387 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zw75" event={"ID":"152c9ef0-50a1-4475-b384-997c53e68a21","Type":"ContainerStarted","Data":"e494008707a036fbd0e51c1c56190b8229a41cc46c8436dc2d3313d1b8ca99e2"} Feb 24 10:46:10 crc kubenswrapper[4755]: I0224 10:46:10.809567 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6zw75" podStartSLOduration=2.203599277 podStartE2EDuration="4.809542508s" podCreationTimestamp="2026-02-24 10:46:06 +0000 UTC" firstStartedPulling="2026-02-24 10:46:07.73872827 +0000 UTC m=+3072.195250813" lastFinishedPulling="2026-02-24 10:46:10.344671501 +0000 UTC m=+3074.801194044" observedRunningTime="2026-02-24 10:46:10.80050245 +0000 UTC m=+3075.257025043" watchObservedRunningTime="2026-02-24 10:46:10.809542508 +0000 UTC m=+3075.266065071" Feb 24 10:46:12 crc kubenswrapper[4755]: I0224 10:46:12.179055 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48986: no serving certificate available for the kubelet" Feb 24 10:46:13 crc kubenswrapper[4755]: I0224 10:46:13.141724 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49002: no serving certificate available for the kubelet" Feb 24 
10:46:15 crc kubenswrapper[4755]: I0224 10:46:15.237442 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53278: no serving certificate available for the kubelet" Feb 24 10:46:16 crc kubenswrapper[4755]: I0224 10:46:16.188328 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53294: no serving certificate available for the kubelet" Feb 24 10:46:16 crc kubenswrapper[4755]: I0224 10:46:16.838013 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6zw75" Feb 24 10:46:16 crc kubenswrapper[4755]: I0224 10:46:16.838179 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6zw75" Feb 24 10:46:16 crc kubenswrapper[4755]: I0224 10:46:16.881144 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6zw75" Feb 24 10:46:17 crc kubenswrapper[4755]: I0224 10:46:17.919881 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6zw75" Feb 24 10:46:17 crc kubenswrapper[4755]: I0224 10:46:17.966467 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6zw75"] Feb 24 10:46:18 crc kubenswrapper[4755]: I0224 10:46:18.285315 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53304: no serving certificate available for the kubelet" Feb 24 10:46:19 crc kubenswrapper[4755]: I0224 10:46:19.234755 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53308: no serving certificate available for the kubelet" Feb 24 10:46:19 crc kubenswrapper[4755]: I0224 10:46:19.857056 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6zw75" podUID="152c9ef0-50a1-4475-b384-997c53e68a21" containerName="registry-server" containerID="cri-o://e494008707a036fbd0e51c1c56190b8229a41cc46c8436dc2d3313d1b8ca99e2" 
gracePeriod=2 Feb 24 10:46:20 crc kubenswrapper[4755]: I0224 10:46:20.869535 4755 generic.go:334] "Generic (PLEG): container finished" podID="152c9ef0-50a1-4475-b384-997c53e68a21" containerID="e494008707a036fbd0e51c1c56190b8229a41cc46c8436dc2d3313d1b8ca99e2" exitCode=0 Feb 24 10:46:20 crc kubenswrapper[4755]: I0224 10:46:20.869655 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zw75" event={"ID":"152c9ef0-50a1-4475-b384-997c53e68a21","Type":"ContainerDied","Data":"e494008707a036fbd0e51c1c56190b8229a41cc46c8436dc2d3313d1b8ca99e2"} Feb 24 10:46:20 crc kubenswrapper[4755]: I0224 10:46:20.981947 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6zw75" Feb 24 10:46:21 crc kubenswrapper[4755]: I0224 10:46:21.142418 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/152c9ef0-50a1-4475-b384-997c53e68a21-catalog-content\") pod \"152c9ef0-50a1-4475-b384-997c53e68a21\" (UID: \"152c9ef0-50a1-4475-b384-997c53e68a21\") " Feb 24 10:46:21 crc kubenswrapper[4755]: I0224 10:46:21.142485 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/152c9ef0-50a1-4475-b384-997c53e68a21-utilities\") pod \"152c9ef0-50a1-4475-b384-997c53e68a21\" (UID: \"152c9ef0-50a1-4475-b384-997c53e68a21\") " Feb 24 10:46:21 crc kubenswrapper[4755]: I0224 10:46:21.142653 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7d86\" (UniqueName: \"kubernetes.io/projected/152c9ef0-50a1-4475-b384-997c53e68a21-kube-api-access-n7d86\") pod \"152c9ef0-50a1-4475-b384-997c53e68a21\" (UID: \"152c9ef0-50a1-4475-b384-997c53e68a21\") " Feb 24 10:46:21 crc kubenswrapper[4755]: I0224 10:46:21.143828 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/152c9ef0-50a1-4475-b384-997c53e68a21-utilities" (OuterVolumeSpecName: "utilities") pod "152c9ef0-50a1-4475-b384-997c53e68a21" (UID: "152c9ef0-50a1-4475-b384-997c53e68a21"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:46:21 crc kubenswrapper[4755]: I0224 10:46:21.155052 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/152c9ef0-50a1-4475-b384-997c53e68a21-kube-api-access-n7d86" (OuterVolumeSpecName: "kube-api-access-n7d86") pod "152c9ef0-50a1-4475-b384-997c53e68a21" (UID: "152c9ef0-50a1-4475-b384-997c53e68a21"). InnerVolumeSpecName "kube-api-access-n7d86". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:46:21 crc kubenswrapper[4755]: I0224 10:46:21.204017 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/152c9ef0-50a1-4475-b384-997c53e68a21-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "152c9ef0-50a1-4475-b384-997c53e68a21" (UID: "152c9ef0-50a1-4475-b384-997c53e68a21"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:46:21 crc kubenswrapper[4755]: I0224 10:46:21.244453 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7d86\" (UniqueName: \"kubernetes.io/projected/152c9ef0-50a1-4475-b384-997c53e68a21-kube-api-access-n7d86\") on node \"crc\" DevicePath \"\"" Feb 24 10:46:21 crc kubenswrapper[4755]: I0224 10:46:21.244490 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/152c9ef0-50a1-4475-b384-997c53e68a21-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 10:46:21 crc kubenswrapper[4755]: I0224 10:46:21.244501 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/152c9ef0-50a1-4475-b384-997c53e68a21-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 10:46:21 crc kubenswrapper[4755]: I0224 10:46:21.332048 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53322: no serving certificate available for the kubelet" Feb 24 10:46:21 crc kubenswrapper[4755]: I0224 10:46:21.695185 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:46:21 crc kubenswrapper[4755]: I0224 10:46:21.695251 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 10:46:21 crc kubenswrapper[4755]: I0224 10:46:21.885994 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zw75" 
event={"ID":"152c9ef0-50a1-4475-b384-997c53e68a21","Type":"ContainerDied","Data":"d12de66636c8eb1f2cf43d08447e5d3acee6f61140b6d38ba3bb2e1e5ed38fba"} Feb 24 10:46:21 crc kubenswrapper[4755]: I0224 10:46:21.886132 4755 scope.go:117] "RemoveContainer" containerID="e494008707a036fbd0e51c1c56190b8229a41cc46c8436dc2d3313d1b8ca99e2" Feb 24 10:46:21 crc kubenswrapper[4755]: I0224 10:46:21.886142 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6zw75" Feb 24 10:46:21 crc kubenswrapper[4755]: I0224 10:46:21.922336 4755 scope.go:117] "RemoveContainer" containerID="f3c47a4d3995724da7f5646688d98947245b1f24c04b9dc007770aab847fd121" Feb 24 10:46:21 crc kubenswrapper[4755]: I0224 10:46:21.970768 4755 scope.go:117] "RemoveContainer" containerID="9987d7c81ee63c74dcc6cf05fcb7166550ccb483b64ef16e78a4573b2e77d36d" Feb 24 10:46:21 crc kubenswrapper[4755]: I0224 10:46:21.977236 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6zw75"] Feb 24 10:46:21 crc kubenswrapper[4755]: I0224 10:46:21.991672 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6zw75"] Feb 24 10:46:22 crc kubenswrapper[4755]: I0224 10:46:22.278523 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53338: no serving certificate available for the kubelet" Feb 24 10:46:22 crc kubenswrapper[4755]: I0224 10:46:22.333392 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="152c9ef0-50a1-4475-b384-997c53e68a21" path="/var/lib/kubelet/pods/152c9ef0-50a1-4475-b384-997c53e68a21/volumes" Feb 24 10:46:24 crc kubenswrapper[4755]: I0224 10:46:24.397729 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50348: no serving certificate available for the kubelet" Feb 24 10:46:25 crc kubenswrapper[4755]: I0224 10:46:25.323798 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50364: no serving certificate 
available for the kubelet" Feb 24 10:46:27 crc kubenswrapper[4755]: I0224 10:46:27.444785 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50376: no serving certificate available for the kubelet" Feb 24 10:46:28 crc kubenswrapper[4755]: I0224 10:46:28.376221 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50386: no serving certificate available for the kubelet" Feb 24 10:46:30 crc kubenswrapper[4755]: I0224 10:46:30.497804 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50396: no serving certificate available for the kubelet" Feb 24 10:46:31 crc kubenswrapper[4755]: I0224 10:46:31.413049 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50404: no serving certificate available for the kubelet" Feb 24 10:46:33 crc kubenswrapper[4755]: I0224 10:46:33.587139 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50408: no serving certificate available for the kubelet" Feb 24 10:46:34 crc kubenswrapper[4755]: I0224 10:46:34.462963 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41876: no serving certificate available for the kubelet" Feb 24 10:46:36 crc kubenswrapper[4755]: I0224 10:46:36.633990 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41878: no serving certificate available for the kubelet" Feb 24 10:46:37 crc kubenswrapper[4755]: I0224 10:46:37.526349 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41894: no serving certificate available for the kubelet" Feb 24 10:46:39 crc kubenswrapper[4755]: I0224 10:46:39.681454 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41896: no serving certificate available for the kubelet" Feb 24 10:46:40 crc kubenswrapper[4755]: I0224 10:46:40.578820 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41898: no serving certificate available for the kubelet" Feb 24 10:46:42 crc kubenswrapper[4755]: I0224 10:46:42.717896 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41906: no serving certificate available for the kubelet" Feb 
24 10:46:43 crc kubenswrapper[4755]: I0224 10:46:43.646417 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41916: no serving certificate available for the kubelet" Feb 24 10:46:45 crc kubenswrapper[4755]: I0224 10:46:45.766848 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37610: no serving certificate available for the kubelet" Feb 24 10:46:46 crc kubenswrapper[4755]: I0224 10:46:46.696516 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37622: no serving certificate available for the kubelet" Feb 24 10:46:48 crc kubenswrapper[4755]: I0224 10:46:48.801643 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37636: no serving certificate available for the kubelet" Feb 24 10:46:49 crc kubenswrapper[4755]: I0224 10:46:49.742047 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37646: no serving certificate available for the kubelet" Feb 24 10:46:51 crc kubenswrapper[4755]: I0224 10:46:51.694740 4755 patch_prober.go:28] interesting pod/machine-config-daemon-8q7ll container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 10:46:51 crc kubenswrapper[4755]: I0224 10:46:51.695053 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 10:46:51 crc kubenswrapper[4755]: I0224 10:46:51.695109 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" Feb 24 10:46:51 crc kubenswrapper[4755]: I0224 10:46:51.695710 4755 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258"} pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 10:46:51 crc kubenswrapper[4755]: I0224 10:46:51.695758 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerName="machine-config-daemon" containerID="cri-o://1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" gracePeriod=600 Feb 24 10:46:51 crc kubenswrapper[4755]: E0224 10:46:51.823037 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:46:51 crc kubenswrapper[4755]: I0224 10:46:51.860592 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37662: no serving certificate available for the kubelet" Feb 24 10:46:52 crc kubenswrapper[4755]: I0224 10:46:52.220842 4755 generic.go:334] "Generic (PLEG): container finished" podID="f6407399-185a-4b27-bd1d-d3816e43a0b5" containerID="1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" exitCode=0 Feb 24 10:46:52 crc kubenswrapper[4755]: I0224 10:46:52.220930 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerDied","Data":"1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258"} Feb 24 10:46:52 crc 
kubenswrapper[4755]: I0224 10:46:52.221266 4755 scope.go:117] "RemoveContainer" containerID="e6973520b04ad15380b7bffb930a3b7d08f7087e679d5562e13d81f8ff1a623f" Feb 24 10:46:52 crc kubenswrapper[4755]: I0224 10:46:52.221960 4755 scope.go:117] "RemoveContainer" containerID="1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" Feb 24 10:46:52 crc kubenswrapper[4755]: E0224 10:46:52.222419 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:46:52 crc kubenswrapper[4755]: I0224 10:46:52.786111 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37668: no serving certificate available for the kubelet" Feb 24 10:46:54 crc kubenswrapper[4755]: I0224 10:46:54.992535 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43064: no serving certificate available for the kubelet" Feb 24 10:46:55 crc kubenswrapper[4755]: I0224 10:46:55.833974 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43068: no serving certificate available for the kubelet" Feb 24 10:46:58 crc kubenswrapper[4755]: I0224 10:46:58.023409 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43074: no serving certificate available for the kubelet" Feb 24 10:46:58 crc kubenswrapper[4755]: I0224 10:46:58.880773 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43078: no serving certificate available for the kubelet" Feb 24 10:47:01 crc kubenswrapper[4755]: I0224 10:47:01.070364 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43092: no serving certificate available for the kubelet" Feb 24 10:47:01 crc kubenswrapper[4755]: I0224 10:47:01.930970 4755 ???:1] "http: TLS 
handshake error from 192.168.126.11:43100: no serving certificate available for the kubelet" Feb 24 10:47:04 crc kubenswrapper[4755]: I0224 10:47:04.123811 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56162: no serving certificate available for the kubelet" Feb 24 10:47:04 crc kubenswrapper[4755]: I0224 10:47:04.978949 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56170: no serving certificate available for the kubelet" Feb 24 10:47:07 crc kubenswrapper[4755]: I0224 10:47:07.178854 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56178: no serving certificate available for the kubelet" Feb 24 10:47:07 crc kubenswrapper[4755]: I0224 10:47:07.316734 4755 scope.go:117] "RemoveContainer" containerID="1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" Feb 24 10:47:07 crc kubenswrapper[4755]: E0224 10:47:07.317508 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:47:08 crc kubenswrapper[4755]: I0224 10:47:08.013509 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56182: no serving certificate available for the kubelet" Feb 24 10:47:09 crc kubenswrapper[4755]: I0224 10:47:09.417019 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" containerName="galera" probeResult="failure" output=< Feb 24 10:47:09 crc kubenswrapper[4755]: waiting for gcomm URI Feb 24 10:47:09 crc kubenswrapper[4755]: > Feb 24 10:47:09 crc kubenswrapper[4755]: I0224 10:47:09.417548 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/openstack-galera-0" Feb 24 10:47:09 crc kubenswrapper[4755]: I0224 10:47:09.418465 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"3a66d8dae34f6f20bcc5933cf924596e95ac093c06c4a681a04907cdab739f53"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed startup probe, will be restarted" Feb 24 10:47:09 crc kubenswrapper[4755]: I0224 10:47:09.489102 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" containerName="galera" containerID="cri-o://3a66d8dae34f6f20bcc5933cf924596e95ac093c06c4a681a04907cdab739f53" gracePeriod=30 Feb 24 10:47:10 crc kubenswrapper[4755]: I0224 10:47:10.232424 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56192: no serving certificate available for the kubelet" Feb 24 10:47:10 crc kubenswrapper[4755]: I0224 10:47:10.397736 4755 generic.go:334] "Generic (PLEG): container finished" podID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" containerID="3a66d8dae34f6f20bcc5933cf924596e95ac093c06c4a681a04907cdab739f53" exitCode=143 Feb 24 10:47:10 crc kubenswrapper[4755]: I0224 10:47:10.397791 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe13802e-a28d-4e11-a315-c0ae66bf0e1c","Type":"ContainerDied","Data":"3a66d8dae34f6f20bcc5933cf924596e95ac093c06c4a681a04907cdab739f53"} Feb 24 10:47:10 crc kubenswrapper[4755]: I0224 10:47:10.397828 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe13802e-a28d-4e11-a315-c0ae66bf0e1c","Type":"ContainerStarted","Data":"143341025cc371f0627ece60e54135ae4e4726a34d9c2a806765f32736f60a34"} Feb 24 10:47:10 crc kubenswrapper[4755]: I0224 10:47:10.397851 4755 scope.go:117] "RemoveContainer" containerID="63b43b5bff9a2589978e4d97ca45ea305c524e73be8cfeed915cc1ece0bd0323" Feb 24 10:47:11 crc 
kubenswrapper[4755]: I0224 10:47:11.060421 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56194: no serving certificate available for the kubelet" Feb 24 10:47:12 crc kubenswrapper[4755]: I0224 10:47:12.782909 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39" containerName="galera" probeResult="failure" output=< Feb 24 10:47:12 crc kubenswrapper[4755]: waiting for gcomm URI Feb 24 10:47:12 crc kubenswrapper[4755]: > Feb 24 10:47:12 crc kubenswrapper[4755]: I0224 10:47:12.783377 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 10:47:12 crc kubenswrapper[4755]: I0224 10:47:12.784472 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"ef4ca47fbdee8a817270c1b69e52b17aa4e27f34d8d6eed7da45026d0c6a003d"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed startup probe, will be restarted" Feb 24 10:47:12 crc kubenswrapper[4755]: I0224 10:47:12.850502 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39" containerName="galera" containerID="cri-o://ef4ca47fbdee8a817270c1b69e52b17aa4e27f34d8d6eed7da45026d0c6a003d" gracePeriod=30 Feb 24 10:47:13 crc kubenswrapper[4755]: I0224 10:47:13.324230 4755 ???:1] "http: TLS handshake error from 192.168.126.11:56208: no serving certificate available for the kubelet" Feb 24 10:47:13 crc kubenswrapper[4755]: I0224 10:47:13.437002 4755 generic.go:334] "Generic (PLEG): container finished" podID="2f320527-691f-48e9-a243-f60bc805da39" containerID="ef4ca47fbdee8a817270c1b69e52b17aa4e27f34d8d6eed7da45026d0c6a003d" exitCode=143 Feb 24 10:47:13 crc kubenswrapper[4755]: I0224 10:47:13.437042 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-cell1-galera-0" event={"ID":"2f320527-691f-48e9-a243-f60bc805da39","Type":"ContainerDied","Data":"ef4ca47fbdee8a817270c1b69e52b17aa4e27f34d8d6eed7da45026d0c6a003d"} Feb 24 10:47:13 crc kubenswrapper[4755]: I0224 10:47:13.437106 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2f320527-691f-48e9-a243-f60bc805da39","Type":"ContainerStarted","Data":"fb15228cab6b3b0ebbb161d141dbaf89fb26137ce9638882c45dd4e7edffc6ad"} Feb 24 10:47:13 crc kubenswrapper[4755]: I0224 10:47:13.437128 4755 scope.go:117] "RemoveContainer" containerID="a84b3cec371eaf91d0e311ef88e8da2d9e2fb0557b9c34f1a91d87c697bed797" Feb 24 10:47:14 crc kubenswrapper[4755]: I0224 10:47:14.115748 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50060: no serving certificate available for the kubelet" Feb 24 10:47:16 crc kubenswrapper[4755]: I0224 10:47:16.378109 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50074: no serving certificate available for the kubelet" Feb 24 10:47:17 crc kubenswrapper[4755]: I0224 10:47:17.168251 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50086: no serving certificate available for the kubelet" Feb 24 10:47:19 crc kubenswrapper[4755]: I0224 10:47:19.423365 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50096: no serving certificate available for the kubelet" Feb 24 10:47:19 crc kubenswrapper[4755]: I0224 10:47:19.887352 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 10:47:19 crc kubenswrapper[4755]: I0224 10:47:19.887395 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 24 10:47:20 crc kubenswrapper[4755]: I0224 10:47:20.206246 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50106: no serving certificate available for the kubelet" Feb 24 10:47:21 crc kubenswrapper[4755]: I0224 10:47:21.289274 4755 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 10:47:21 crc kubenswrapper[4755]: I0224 10:47:21.289858 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 24 10:47:21 crc kubenswrapper[4755]: I0224 10:47:21.316939 4755 scope.go:117] "RemoveContainer" containerID="1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" Feb 24 10:47:21 crc kubenswrapper[4755]: E0224 10:47:21.317403 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:47:22 crc kubenswrapper[4755]: I0224 10:47:22.459844 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50120: no serving certificate available for the kubelet" Feb 24 10:47:23 crc kubenswrapper[4755]: I0224 10:47:23.264540 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50132: no serving certificate available for the kubelet" Feb 24 10:47:25 crc kubenswrapper[4755]: I0224 10:47:25.500566 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46526: no serving certificate available for the kubelet" Feb 24 10:47:26 crc kubenswrapper[4755]: I0224 10:47:26.308018 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46528: no serving certificate available for the kubelet" Feb 24 10:47:28 crc kubenswrapper[4755]: I0224 10:47:28.554658 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46532: no serving certificate available for the kubelet" Feb 24 10:47:29 crc kubenswrapper[4755]: I0224 10:47:29.358095 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46538: no serving 
certificate available for the kubelet" Feb 24 10:47:31 crc kubenswrapper[4755]: I0224 10:47:31.614391 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46540: no serving certificate available for the kubelet" Feb 24 10:47:32 crc kubenswrapper[4755]: I0224 10:47:32.398512 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46546: no serving certificate available for the kubelet" Feb 24 10:47:34 crc kubenswrapper[4755]: I0224 10:47:34.317300 4755 scope.go:117] "RemoveContainer" containerID="1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" Feb 24 10:47:34 crc kubenswrapper[4755]: E0224 10:47:34.318037 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:47:34 crc kubenswrapper[4755]: I0224 10:47:34.660386 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49492: no serving certificate available for the kubelet" Feb 24 10:47:35 crc kubenswrapper[4755]: I0224 10:47:35.445312 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49494: no serving certificate available for the kubelet" Feb 24 10:47:37 crc kubenswrapper[4755]: I0224 10:47:37.712512 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49504: no serving certificate available for the kubelet" Feb 24 10:47:38 crc kubenswrapper[4755]: I0224 10:47:38.509180 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49520: no serving certificate available for the kubelet" Feb 24 10:47:40 crc kubenswrapper[4755]: I0224 10:47:40.755554 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49528: no serving certificate available for the kubelet" Feb 24 10:47:41 crc 
kubenswrapper[4755]: I0224 10:47:41.560144 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49542: no serving certificate available for the kubelet" Feb 24 10:47:43 crc kubenswrapper[4755]: I0224 10:47:43.792558 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39358: no serving certificate available for the kubelet" Feb 24 10:47:44 crc kubenswrapper[4755]: I0224 10:47:44.616222 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39368: no serving certificate available for the kubelet" Feb 24 10:47:46 crc kubenswrapper[4755]: I0224 10:47:46.848425 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39384: no serving certificate available for the kubelet" Feb 24 10:47:47 crc kubenswrapper[4755]: I0224 10:47:47.656281 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39386: no serving certificate available for the kubelet" Feb 24 10:47:49 crc kubenswrapper[4755]: I0224 10:47:49.329495 4755 scope.go:117] "RemoveContainer" containerID="1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" Feb 24 10:47:49 crc kubenswrapper[4755]: E0224 10:47:49.330083 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:47:49 crc kubenswrapper[4755]: I0224 10:47:49.903353 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39398: no serving certificate available for the kubelet" Feb 24 10:47:50 crc kubenswrapper[4755]: I0224 10:47:50.711888 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39410: no serving certificate available for the kubelet" Feb 24 10:47:52 crc kubenswrapper[4755]: I0224 10:47:52.957459 4755 ???:1] "http: TLS 
handshake error from 192.168.126.11:39424: no serving certificate available for the kubelet" Feb 24 10:47:53 crc kubenswrapper[4755]: I0224 10:47:53.765084 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43392: no serving certificate available for the kubelet" Feb 24 10:47:56 crc kubenswrapper[4755]: I0224 10:47:56.017283 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43394: no serving certificate available for the kubelet" Feb 24 10:47:56 crc kubenswrapper[4755]: I0224 10:47:56.749729 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2rtzm"] Feb 24 10:47:56 crc kubenswrapper[4755]: E0224 10:47:56.750653 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="152c9ef0-50a1-4475-b384-997c53e68a21" containerName="registry-server" Feb 24 10:47:56 crc kubenswrapper[4755]: I0224 10:47:56.750683 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="152c9ef0-50a1-4475-b384-997c53e68a21" containerName="registry-server" Feb 24 10:47:56 crc kubenswrapper[4755]: E0224 10:47:56.750710 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="152c9ef0-50a1-4475-b384-997c53e68a21" containerName="extract-utilities" Feb 24 10:47:56 crc kubenswrapper[4755]: I0224 10:47:56.750724 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="152c9ef0-50a1-4475-b384-997c53e68a21" containerName="extract-utilities" Feb 24 10:47:56 crc kubenswrapper[4755]: E0224 10:47:56.750753 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="152c9ef0-50a1-4475-b384-997c53e68a21" containerName="extract-content" Feb 24 10:47:56 crc kubenswrapper[4755]: I0224 10:47:56.750765 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="152c9ef0-50a1-4475-b384-997c53e68a21" containerName="extract-content" Feb 24 10:47:56 crc kubenswrapper[4755]: I0224 10:47:56.751013 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="152c9ef0-50a1-4475-b384-997c53e68a21" 
containerName="registry-server" Feb 24 10:47:56 crc kubenswrapper[4755]: I0224 10:47:56.752897 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2rtzm" Feb 24 10:47:56 crc kubenswrapper[4755]: I0224 10:47:56.780198 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2rtzm"] Feb 24 10:47:56 crc kubenswrapper[4755]: I0224 10:47:56.826353 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43408: no serving certificate available for the kubelet" Feb 24 10:47:56 crc kubenswrapper[4755]: I0224 10:47:56.853706 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27wdh\" (UniqueName: \"kubernetes.io/projected/f54ed1dd-5d03-4ff9-9ffe-532270edf75c-kube-api-access-27wdh\") pod \"certified-operators-2rtzm\" (UID: \"f54ed1dd-5d03-4ff9-9ffe-532270edf75c\") " pod="openshift-marketplace/certified-operators-2rtzm" Feb 24 10:47:56 crc kubenswrapper[4755]: I0224 10:47:56.853985 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f54ed1dd-5d03-4ff9-9ffe-532270edf75c-catalog-content\") pod \"certified-operators-2rtzm\" (UID: \"f54ed1dd-5d03-4ff9-9ffe-532270edf75c\") " pod="openshift-marketplace/certified-operators-2rtzm" Feb 24 10:47:56 crc kubenswrapper[4755]: I0224 10:47:56.854130 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f54ed1dd-5d03-4ff9-9ffe-532270edf75c-utilities\") pod \"certified-operators-2rtzm\" (UID: \"f54ed1dd-5d03-4ff9-9ffe-532270edf75c\") " pod="openshift-marketplace/certified-operators-2rtzm" Feb 24 10:47:56 crc kubenswrapper[4755]: I0224 10:47:56.955880 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/f54ed1dd-5d03-4ff9-9ffe-532270edf75c-catalog-content\") pod \"certified-operators-2rtzm\" (UID: \"f54ed1dd-5d03-4ff9-9ffe-532270edf75c\") " pod="openshift-marketplace/certified-operators-2rtzm" Feb 24 10:47:56 crc kubenswrapper[4755]: I0224 10:47:56.955969 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f54ed1dd-5d03-4ff9-9ffe-532270edf75c-utilities\") pod \"certified-operators-2rtzm\" (UID: \"f54ed1dd-5d03-4ff9-9ffe-532270edf75c\") " pod="openshift-marketplace/certified-operators-2rtzm" Feb 24 10:47:56 crc kubenswrapper[4755]: I0224 10:47:56.956052 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f54ed1dd-5d03-4ff9-9ffe-532270edf75c-catalog-content\") pod \"certified-operators-2rtzm\" (UID: \"f54ed1dd-5d03-4ff9-9ffe-532270edf75c\") " pod="openshift-marketplace/certified-operators-2rtzm" Feb 24 10:47:56 crc kubenswrapper[4755]: I0224 10:47:56.956116 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27wdh\" (UniqueName: \"kubernetes.io/projected/f54ed1dd-5d03-4ff9-9ffe-532270edf75c-kube-api-access-27wdh\") pod \"certified-operators-2rtzm\" (UID: \"f54ed1dd-5d03-4ff9-9ffe-532270edf75c\") " pod="openshift-marketplace/certified-operators-2rtzm" Feb 24 10:47:56 crc kubenswrapper[4755]: I0224 10:47:56.956406 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f54ed1dd-5d03-4ff9-9ffe-532270edf75c-utilities\") pod \"certified-operators-2rtzm\" (UID: \"f54ed1dd-5d03-4ff9-9ffe-532270edf75c\") " pod="openshift-marketplace/certified-operators-2rtzm" Feb 24 10:47:56 crc kubenswrapper[4755]: I0224 10:47:56.980397 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27wdh\" (UniqueName: 
\"kubernetes.io/projected/f54ed1dd-5d03-4ff9-9ffe-532270edf75c-kube-api-access-27wdh\") pod \"certified-operators-2rtzm\" (UID: \"f54ed1dd-5d03-4ff9-9ffe-532270edf75c\") " pod="openshift-marketplace/certified-operators-2rtzm" Feb 24 10:47:57 crc kubenswrapper[4755]: I0224 10:47:57.082139 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2rtzm" Feb 24 10:47:57 crc kubenswrapper[4755]: I0224 10:47:57.596390 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2rtzm"] Feb 24 10:47:57 crc kubenswrapper[4755]: I0224 10:47:57.880426 4755 generic.go:334] "Generic (PLEG): container finished" podID="f54ed1dd-5d03-4ff9-9ffe-532270edf75c" containerID="37060337d6a2a2b2e3a07c144143bc1ea65071817f32146be18f390ed7e4c371" exitCode=0 Feb 24 10:47:57 crc kubenswrapper[4755]: I0224 10:47:57.880481 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2rtzm" event={"ID":"f54ed1dd-5d03-4ff9-9ffe-532270edf75c","Type":"ContainerDied","Data":"37060337d6a2a2b2e3a07c144143bc1ea65071817f32146be18f390ed7e4c371"} Feb 24 10:47:57 crc kubenswrapper[4755]: I0224 10:47:57.880724 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2rtzm" event={"ID":"f54ed1dd-5d03-4ff9-9ffe-532270edf75c","Type":"ContainerStarted","Data":"9dd0561e69cdf156630eef19c0a9e4dfcd311970c147082cc687805ad64b2a2f"} Feb 24 10:47:59 crc kubenswrapper[4755]: I0224 10:47:59.052876 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43416: no serving certificate available for the kubelet" Feb 24 10:47:59 crc kubenswrapper[4755]: I0224 10:47:59.888928 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43418: no serving certificate available for the kubelet" Feb 24 10:48:00 crc kubenswrapper[4755]: I0224 10:48:00.316786 4755 scope.go:117] "RemoveContainer" 
containerID="1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" Feb 24 10:48:00 crc kubenswrapper[4755]: E0224 10:48:00.317270 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:48:01 crc kubenswrapper[4755]: I0224 10:48:01.916838 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2rtzm" event={"ID":"f54ed1dd-5d03-4ff9-9ffe-532270edf75c","Type":"ContainerStarted","Data":"0840d4f74c6e0f7db9d6131a28b29019a4568fde3852eac4f9f6380506d7506b"} Feb 24 10:48:02 crc kubenswrapper[4755]: I0224 10:48:02.091952 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43426: no serving certificate available for the kubelet" Feb 24 10:48:02 crc kubenswrapper[4755]: I0224 10:48:02.925925 4755 generic.go:334] "Generic (PLEG): container finished" podID="f54ed1dd-5d03-4ff9-9ffe-532270edf75c" containerID="0840d4f74c6e0f7db9d6131a28b29019a4568fde3852eac4f9f6380506d7506b" exitCode=0 Feb 24 10:48:02 crc kubenswrapper[4755]: I0224 10:48:02.925980 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2rtzm" event={"ID":"f54ed1dd-5d03-4ff9-9ffe-532270edf75c","Type":"ContainerDied","Data":"0840d4f74c6e0f7db9d6131a28b29019a4568fde3852eac4f9f6380506d7506b"} Feb 24 10:48:02 crc kubenswrapper[4755]: I0224 10:48:02.936671 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43440: no serving certificate available for the kubelet" Feb 24 10:48:03 crc kubenswrapper[4755]: I0224 10:48:03.943417 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-2rtzm" event={"ID":"f54ed1dd-5d03-4ff9-9ffe-532270edf75c","Type":"ContainerStarted","Data":"f987b1c2daffeda1c0395dd66e019621107fc8fa59ceac0bb36fa278a7636371"} Feb 24 10:48:03 crc kubenswrapper[4755]: I0224 10:48:03.964019 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2rtzm" podStartSLOduration=2.201963119 podStartE2EDuration="7.963995612s" podCreationTimestamp="2026-02-24 10:47:56 +0000 UTC" firstStartedPulling="2026-02-24 10:47:57.882673704 +0000 UTC m=+3182.339196297" lastFinishedPulling="2026-02-24 10:48:03.644706247 +0000 UTC m=+3188.101228790" observedRunningTime="2026-02-24 10:48:03.960479734 +0000 UTC m=+3188.417002297" watchObservedRunningTime="2026-02-24 10:48:03.963995612 +0000 UTC m=+3188.420518155" Feb 24 10:48:05 crc kubenswrapper[4755]: I0224 10:48:05.138273 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54922: no serving certificate available for the kubelet" Feb 24 10:48:05 crc kubenswrapper[4755]: I0224 10:48:05.991706 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54926: no serving certificate available for the kubelet" Feb 24 10:48:07 crc kubenswrapper[4755]: I0224 10:48:07.082603 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2rtzm" Feb 24 10:48:07 crc kubenswrapper[4755]: I0224 10:48:07.083121 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2rtzm" Feb 24 10:48:07 crc kubenswrapper[4755]: I0224 10:48:07.171283 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2rtzm" Feb 24 10:48:08 crc kubenswrapper[4755]: I0224 10:48:08.200497 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54930: no serving certificate available for the kubelet" Feb 24 10:48:09 crc kubenswrapper[4755]: I0224 
10:48:09.027508 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54944: no serving certificate available for the kubelet" Feb 24 10:48:11 crc kubenswrapper[4755]: I0224 10:48:11.259801 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54956: no serving certificate available for the kubelet" Feb 24 10:48:11 crc kubenswrapper[4755]: I0224 10:48:11.315952 4755 scope.go:117] "RemoveContainer" containerID="1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" Feb 24 10:48:11 crc kubenswrapper[4755]: E0224 10:48:11.316525 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:48:12 crc kubenswrapper[4755]: I0224 10:48:12.072384 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54972: no serving certificate available for the kubelet" Feb 24 10:48:14 crc kubenswrapper[4755]: I0224 10:48:14.320624 4755 ???:1] "http: TLS handshake error from 192.168.126.11:32926: no serving certificate available for the kubelet" Feb 24 10:48:15 crc kubenswrapper[4755]: I0224 10:48:15.107976 4755 ???:1] "http: TLS handshake error from 192.168.126.11:32934: no serving certificate available for the kubelet" Feb 24 10:48:17 crc kubenswrapper[4755]: I0224 10:48:17.146745 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2rtzm" Feb 24 10:48:17 crc kubenswrapper[4755]: I0224 10:48:17.236697 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2rtzm"] Feb 24 10:48:17 crc kubenswrapper[4755]: I0224 10:48:17.286398 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-k4trd"] Feb 24 10:48:17 crc kubenswrapper[4755]: I0224 10:48:17.286722 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k4trd" podUID="481780d1-bd6b-4674-a476-9e10935c1927" containerName="registry-server" containerID="cri-o://ab0d2b2a0a65767b5599eff7994ae7382cd47cea5bb5cfaae79d888485c11508" gracePeriod=2 Feb 24 10:48:17 crc kubenswrapper[4755]: I0224 10:48:17.366467 4755 ???:1] "http: TLS handshake error from 192.168.126.11:32948: no serving certificate available for the kubelet" Feb 24 10:48:17 crc kubenswrapper[4755]: I0224 10:48:17.683568 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k4trd" Feb 24 10:48:17 crc kubenswrapper[4755]: I0224 10:48:17.834592 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zplmm\" (UniqueName: \"kubernetes.io/projected/481780d1-bd6b-4674-a476-9e10935c1927-kube-api-access-zplmm\") pod \"481780d1-bd6b-4674-a476-9e10935c1927\" (UID: \"481780d1-bd6b-4674-a476-9e10935c1927\") " Feb 24 10:48:17 crc kubenswrapper[4755]: I0224 10:48:17.834688 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481780d1-bd6b-4674-a476-9e10935c1927-catalog-content\") pod \"481780d1-bd6b-4674-a476-9e10935c1927\" (UID: \"481780d1-bd6b-4674-a476-9e10935c1927\") " Feb 24 10:48:17 crc kubenswrapper[4755]: I0224 10:48:17.834865 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481780d1-bd6b-4674-a476-9e10935c1927-utilities\") pod \"481780d1-bd6b-4674-a476-9e10935c1927\" (UID: \"481780d1-bd6b-4674-a476-9e10935c1927\") " Feb 24 10:48:17 crc kubenswrapper[4755]: I0224 10:48:17.836116 4755 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/481780d1-bd6b-4674-a476-9e10935c1927-utilities" (OuterVolumeSpecName: "utilities") pod "481780d1-bd6b-4674-a476-9e10935c1927" (UID: "481780d1-bd6b-4674-a476-9e10935c1927"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:48:17 crc kubenswrapper[4755]: I0224 10:48:17.848966 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/481780d1-bd6b-4674-a476-9e10935c1927-kube-api-access-zplmm" (OuterVolumeSpecName: "kube-api-access-zplmm") pod "481780d1-bd6b-4674-a476-9e10935c1927" (UID: "481780d1-bd6b-4674-a476-9e10935c1927"). InnerVolumeSpecName "kube-api-access-zplmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:48:17 crc kubenswrapper[4755]: I0224 10:48:17.900447 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/481780d1-bd6b-4674-a476-9e10935c1927-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "481780d1-bd6b-4674-a476-9e10935c1927" (UID: "481780d1-bd6b-4674-a476-9e10935c1927"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:48:17 crc kubenswrapper[4755]: I0224 10:48:17.938609 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481780d1-bd6b-4674-a476-9e10935c1927-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 10:48:17 crc kubenswrapper[4755]: I0224 10:48:17.938665 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zplmm\" (UniqueName: \"kubernetes.io/projected/481780d1-bd6b-4674-a476-9e10935c1927-kube-api-access-zplmm\") on node \"crc\" DevicePath \"\"" Feb 24 10:48:17 crc kubenswrapper[4755]: I0224 10:48:17.938679 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481780d1-bd6b-4674-a476-9e10935c1927-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 10:48:18 crc kubenswrapper[4755]: I0224 10:48:18.083744 4755 generic.go:334] "Generic (PLEG): container finished" podID="481780d1-bd6b-4674-a476-9e10935c1927" containerID="ab0d2b2a0a65767b5599eff7994ae7382cd47cea5bb5cfaae79d888485c11508" exitCode=0 Feb 24 10:48:18 crc kubenswrapper[4755]: I0224 10:48:18.083802 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k4trd" Feb 24 10:48:18 crc kubenswrapper[4755]: I0224 10:48:18.083837 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4trd" event={"ID":"481780d1-bd6b-4674-a476-9e10935c1927","Type":"ContainerDied","Data":"ab0d2b2a0a65767b5599eff7994ae7382cd47cea5bb5cfaae79d888485c11508"} Feb 24 10:48:18 crc kubenswrapper[4755]: I0224 10:48:18.083899 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4trd" event={"ID":"481780d1-bd6b-4674-a476-9e10935c1927","Type":"ContainerDied","Data":"8a7061440280816dcbe778611812428337bec0bf5131ff93d630ed521e5b7157"} Feb 24 10:48:18 crc kubenswrapper[4755]: I0224 10:48:18.083917 4755 scope.go:117] "RemoveContainer" containerID="ab0d2b2a0a65767b5599eff7994ae7382cd47cea5bb5cfaae79d888485c11508" Feb 24 10:48:18 crc kubenswrapper[4755]: I0224 10:48:18.100719 4755 scope.go:117] "RemoveContainer" containerID="5d9fb6702f511a2f17e80d41ab0f00b03ada6a0a6149a70e3544c1917023191e" Feb 24 10:48:18 crc kubenswrapper[4755]: I0224 10:48:18.117281 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k4trd"] Feb 24 10:48:18 crc kubenswrapper[4755]: I0224 10:48:18.123303 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k4trd"] Feb 24 10:48:18 crc kubenswrapper[4755]: I0224 10:48:18.128157 4755 scope.go:117] "RemoveContainer" containerID="60b54b2ceb2b0a3344bacb81884675d854f347e3a06b1bedcb5a81e175e4fd0b" Feb 24 10:48:18 crc kubenswrapper[4755]: I0224 10:48:18.152783 4755 ???:1] "http: TLS handshake error from 192.168.126.11:32960: no serving certificate available for the kubelet" Feb 24 10:48:18 crc kubenswrapper[4755]: I0224 10:48:18.165640 4755 scope.go:117] "RemoveContainer" containerID="ab0d2b2a0a65767b5599eff7994ae7382cd47cea5bb5cfaae79d888485c11508" Feb 24 10:48:18 crc 
kubenswrapper[4755]: E0224 10:48:18.166250 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab0d2b2a0a65767b5599eff7994ae7382cd47cea5bb5cfaae79d888485c11508\": container with ID starting with ab0d2b2a0a65767b5599eff7994ae7382cd47cea5bb5cfaae79d888485c11508 not found: ID does not exist" containerID="ab0d2b2a0a65767b5599eff7994ae7382cd47cea5bb5cfaae79d888485c11508" Feb 24 10:48:18 crc kubenswrapper[4755]: I0224 10:48:18.166392 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab0d2b2a0a65767b5599eff7994ae7382cd47cea5bb5cfaae79d888485c11508"} err="failed to get container status \"ab0d2b2a0a65767b5599eff7994ae7382cd47cea5bb5cfaae79d888485c11508\": rpc error: code = NotFound desc = could not find container \"ab0d2b2a0a65767b5599eff7994ae7382cd47cea5bb5cfaae79d888485c11508\": container with ID starting with ab0d2b2a0a65767b5599eff7994ae7382cd47cea5bb5cfaae79d888485c11508 not found: ID does not exist" Feb 24 10:48:18 crc kubenswrapper[4755]: I0224 10:48:18.166497 4755 scope.go:117] "RemoveContainer" containerID="5d9fb6702f511a2f17e80d41ab0f00b03ada6a0a6149a70e3544c1917023191e" Feb 24 10:48:18 crc kubenswrapper[4755]: E0224 10:48:18.166997 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d9fb6702f511a2f17e80d41ab0f00b03ada6a0a6149a70e3544c1917023191e\": container with ID starting with 5d9fb6702f511a2f17e80d41ab0f00b03ada6a0a6149a70e3544c1917023191e not found: ID does not exist" containerID="5d9fb6702f511a2f17e80d41ab0f00b03ada6a0a6149a70e3544c1917023191e" Feb 24 10:48:18 crc kubenswrapper[4755]: I0224 10:48:18.167037 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d9fb6702f511a2f17e80d41ab0f00b03ada6a0a6149a70e3544c1917023191e"} err="failed to get container status 
\"5d9fb6702f511a2f17e80d41ab0f00b03ada6a0a6149a70e3544c1917023191e\": rpc error: code = NotFound desc = could not find container \"5d9fb6702f511a2f17e80d41ab0f00b03ada6a0a6149a70e3544c1917023191e\": container with ID starting with 5d9fb6702f511a2f17e80d41ab0f00b03ada6a0a6149a70e3544c1917023191e not found: ID does not exist" Feb 24 10:48:18 crc kubenswrapper[4755]: I0224 10:48:18.167082 4755 scope.go:117] "RemoveContainer" containerID="60b54b2ceb2b0a3344bacb81884675d854f347e3a06b1bedcb5a81e175e4fd0b" Feb 24 10:48:18 crc kubenswrapper[4755]: E0224 10:48:18.167318 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60b54b2ceb2b0a3344bacb81884675d854f347e3a06b1bedcb5a81e175e4fd0b\": container with ID starting with 60b54b2ceb2b0a3344bacb81884675d854f347e3a06b1bedcb5a81e175e4fd0b not found: ID does not exist" containerID="60b54b2ceb2b0a3344bacb81884675d854f347e3a06b1bedcb5a81e175e4fd0b" Feb 24 10:48:18 crc kubenswrapper[4755]: I0224 10:48:18.167339 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60b54b2ceb2b0a3344bacb81884675d854f347e3a06b1bedcb5a81e175e4fd0b"} err="failed to get container status \"60b54b2ceb2b0a3344bacb81884675d854f347e3a06b1bedcb5a81e175e4fd0b\": rpc error: code = NotFound desc = could not find container \"60b54b2ceb2b0a3344bacb81884675d854f347e3a06b1bedcb5a81e175e4fd0b\": container with ID starting with 60b54b2ceb2b0a3344bacb81884675d854f347e3a06b1bedcb5a81e175e4fd0b not found: ID does not exist" Feb 24 10:48:18 crc kubenswrapper[4755]: I0224 10:48:18.326780 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="481780d1-bd6b-4674-a476-9e10935c1927" path="/var/lib/kubelet/pods/481780d1-bd6b-4674-a476-9e10935c1927/volumes" Feb 24 10:48:20 crc kubenswrapper[4755]: I0224 10:48:20.424559 4755 ???:1] "http: TLS handshake error from 192.168.126.11:32964: no serving certificate available for the kubelet" Feb 24 
10:48:20 crc kubenswrapper[4755]: I0224 10:48:20.565272 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rcvqm/must-gather-5ssns"] Feb 24 10:48:20 crc kubenswrapper[4755]: E0224 10:48:20.565583 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481780d1-bd6b-4674-a476-9e10935c1927" containerName="extract-utilities" Feb 24 10:48:20 crc kubenswrapper[4755]: I0224 10:48:20.565598 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="481780d1-bd6b-4674-a476-9e10935c1927" containerName="extract-utilities" Feb 24 10:48:20 crc kubenswrapper[4755]: E0224 10:48:20.565632 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481780d1-bd6b-4674-a476-9e10935c1927" containerName="registry-server" Feb 24 10:48:20 crc kubenswrapper[4755]: I0224 10:48:20.565640 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="481780d1-bd6b-4674-a476-9e10935c1927" containerName="registry-server" Feb 24 10:48:20 crc kubenswrapper[4755]: E0224 10:48:20.565654 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481780d1-bd6b-4674-a476-9e10935c1927" containerName="extract-content" Feb 24 10:48:20 crc kubenswrapper[4755]: I0224 10:48:20.565660 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="481780d1-bd6b-4674-a476-9e10935c1927" containerName="extract-content" Feb 24 10:48:20 crc kubenswrapper[4755]: I0224 10:48:20.565826 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="481780d1-bd6b-4674-a476-9e10935c1927" containerName="registry-server" Feb 24 10:48:20 crc kubenswrapper[4755]: I0224 10:48:20.566596 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rcvqm/must-gather-5ssns" Feb 24 10:48:20 crc kubenswrapper[4755]: I0224 10:48:20.568620 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rcvqm"/"openshift-service-ca.crt" Feb 24 10:48:20 crc kubenswrapper[4755]: I0224 10:48:20.568730 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rcvqm"/"default-dockercfg-6stdd" Feb 24 10:48:20 crc kubenswrapper[4755]: I0224 10:48:20.570120 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rcvqm"/"kube-root-ca.crt" Feb 24 10:48:20 crc kubenswrapper[4755]: I0224 10:48:20.630336 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rcvqm/must-gather-5ssns"] Feb 24 10:48:20 crc kubenswrapper[4755]: I0224 10:48:20.697923 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcw4l\" (UniqueName: \"kubernetes.io/projected/0c8dd3be-6e22-4240-a626-dc83b7a646d7-kube-api-access-fcw4l\") pod \"must-gather-5ssns\" (UID: \"0c8dd3be-6e22-4240-a626-dc83b7a646d7\") " pod="openshift-must-gather-rcvqm/must-gather-5ssns" Feb 24 10:48:20 crc kubenswrapper[4755]: I0224 10:48:20.698320 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0c8dd3be-6e22-4240-a626-dc83b7a646d7-must-gather-output\") pod \"must-gather-5ssns\" (UID: \"0c8dd3be-6e22-4240-a626-dc83b7a646d7\") " pod="openshift-must-gather-rcvqm/must-gather-5ssns" Feb 24 10:48:20 crc kubenswrapper[4755]: I0224 10:48:20.799940 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0c8dd3be-6e22-4240-a626-dc83b7a646d7-must-gather-output\") pod \"must-gather-5ssns\" (UID: \"0c8dd3be-6e22-4240-a626-dc83b7a646d7\") " 
pod="openshift-must-gather-rcvqm/must-gather-5ssns" Feb 24 10:48:20 crc kubenswrapper[4755]: I0224 10:48:20.800297 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcw4l\" (UniqueName: \"kubernetes.io/projected/0c8dd3be-6e22-4240-a626-dc83b7a646d7-kube-api-access-fcw4l\") pod \"must-gather-5ssns\" (UID: \"0c8dd3be-6e22-4240-a626-dc83b7a646d7\") " pod="openshift-must-gather-rcvqm/must-gather-5ssns" Feb 24 10:48:20 crc kubenswrapper[4755]: I0224 10:48:20.800422 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0c8dd3be-6e22-4240-a626-dc83b7a646d7-must-gather-output\") pod \"must-gather-5ssns\" (UID: \"0c8dd3be-6e22-4240-a626-dc83b7a646d7\") " pod="openshift-must-gather-rcvqm/must-gather-5ssns" Feb 24 10:48:20 crc kubenswrapper[4755]: I0224 10:48:20.817305 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcw4l\" (UniqueName: \"kubernetes.io/projected/0c8dd3be-6e22-4240-a626-dc83b7a646d7-kube-api-access-fcw4l\") pod \"must-gather-5ssns\" (UID: \"0c8dd3be-6e22-4240-a626-dc83b7a646d7\") " pod="openshift-must-gather-rcvqm/must-gather-5ssns" Feb 24 10:48:20 crc kubenswrapper[4755]: I0224 10:48:20.883203 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rcvqm/must-gather-5ssns" Feb 24 10:48:21 crc kubenswrapper[4755]: I0224 10:48:21.185080 4755 ???:1] "http: TLS handshake error from 192.168.126.11:32970: no serving certificate available for the kubelet" Feb 24 10:48:21 crc kubenswrapper[4755]: I0224 10:48:21.342556 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rcvqm/must-gather-5ssns"] Feb 24 10:48:21 crc kubenswrapper[4755]: W0224 10:48:21.351538 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c8dd3be_6e22_4240_a626_dc83b7a646d7.slice/crio-bec881a04b68d64b27753ce77feba1e5217ccc02dad627349cd694852486bbb6 WatchSource:0}: Error finding container bec881a04b68d64b27753ce77feba1e5217ccc02dad627349cd694852486bbb6: Status 404 returned error can't find the container with id bec881a04b68d64b27753ce77feba1e5217ccc02dad627349cd694852486bbb6 Feb 24 10:48:22 crc kubenswrapper[4755]: I0224 10:48:22.132228 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rcvqm/must-gather-5ssns" event={"ID":"0c8dd3be-6e22-4240-a626-dc83b7a646d7","Type":"ContainerStarted","Data":"bec881a04b68d64b27753ce77feba1e5217ccc02dad627349cd694852486bbb6"} Feb 24 10:48:22 crc kubenswrapper[4755]: I0224 10:48:22.316861 4755 scope.go:117] "RemoveContainer" containerID="1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" Feb 24 10:48:22 crc kubenswrapper[4755]: E0224 10:48:22.317260 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:48:23 crc 
kubenswrapper[4755]: I0224 10:48:23.467968 4755 ???:1] "http: TLS handshake error from 192.168.126.11:32972: no serving certificate available for the kubelet" Feb 24 10:48:24 crc kubenswrapper[4755]: I0224 10:48:24.220760 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50838: no serving certificate available for the kubelet" Feb 24 10:48:26 crc kubenswrapper[4755]: I0224 10:48:26.503951 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50844: no serving certificate available for the kubelet" Feb 24 10:48:27 crc kubenswrapper[4755]: I0224 10:48:27.271653 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50860: no serving certificate available for the kubelet" Feb 24 10:48:29 crc kubenswrapper[4755]: I0224 10:48:29.587870 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50866: no serving certificate available for the kubelet" Feb 24 10:48:30 crc kubenswrapper[4755]: I0224 10:48:30.207907 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rcvqm/must-gather-5ssns" event={"ID":"0c8dd3be-6e22-4240-a626-dc83b7a646d7","Type":"ContainerStarted","Data":"b202ad9d05d75504bf6cc1655b12b19f4ccbf5b27bc3674fdc3c2f7916a993ba"} Feb 24 10:48:30 crc kubenswrapper[4755]: I0224 10:48:30.208283 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rcvqm/must-gather-5ssns" event={"ID":"0c8dd3be-6e22-4240-a626-dc83b7a646d7","Type":"ContainerStarted","Data":"ac8ac232dbb29a435e990e1d3e301b5751c27f11041e8984ee6057f0cffdaede"} Feb 24 10:48:30 crc kubenswrapper[4755]: I0224 10:48:30.234315 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rcvqm/must-gather-5ssns" podStartSLOduration=2.040493952 podStartE2EDuration="10.234294157s" podCreationTimestamp="2026-02-24 10:48:20 +0000 UTC" firstStartedPulling="2026-02-24 10:48:21.35453886 +0000 UTC m=+3205.811061403" lastFinishedPulling="2026-02-24 10:48:29.548339065 +0000 UTC m=+3214.004861608" 
observedRunningTime="2026-02-24 10:48:30.225014992 +0000 UTC m=+3214.681537555" watchObservedRunningTime="2026-02-24 10:48:30.234294157 +0000 UTC m=+3214.690816710" Feb 24 10:48:30 crc kubenswrapper[4755]: I0224 10:48:30.321411 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50880: no serving certificate available for the kubelet" Feb 24 10:48:30 crc kubenswrapper[4755]: I0224 10:48:30.561510 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50888: no serving certificate available for the kubelet" Feb 24 10:48:30 crc kubenswrapper[4755]: I0224 10:48:30.601128 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rcvqm/crc-debug-6lkbp"] Feb 24 10:48:30 crc kubenswrapper[4755]: I0224 10:48:30.603683 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rcvqm/crc-debug-6lkbp" Feb 24 10:48:30 crc kubenswrapper[4755]: I0224 10:48:30.701981 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fd7a7c6e-162b-44f2-96ac-6f09e07fd37a-host\") pod \"crc-debug-6lkbp\" (UID: \"fd7a7c6e-162b-44f2-96ac-6f09e07fd37a\") " pod="openshift-must-gather-rcvqm/crc-debug-6lkbp" Feb 24 10:48:30 crc kubenswrapper[4755]: I0224 10:48:30.702190 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzp44\" (UniqueName: \"kubernetes.io/projected/fd7a7c6e-162b-44f2-96ac-6f09e07fd37a-kube-api-access-pzp44\") pod \"crc-debug-6lkbp\" (UID: \"fd7a7c6e-162b-44f2-96ac-6f09e07fd37a\") " pod="openshift-must-gather-rcvqm/crc-debug-6lkbp" Feb 24 10:48:30 crc kubenswrapper[4755]: I0224 10:48:30.803237 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fd7a7c6e-162b-44f2-96ac-6f09e07fd37a-host\") pod \"crc-debug-6lkbp\" (UID: \"fd7a7c6e-162b-44f2-96ac-6f09e07fd37a\") " 
pod="openshift-must-gather-rcvqm/crc-debug-6lkbp" Feb 24 10:48:30 crc kubenswrapper[4755]: I0224 10:48:30.803368 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fd7a7c6e-162b-44f2-96ac-6f09e07fd37a-host\") pod \"crc-debug-6lkbp\" (UID: \"fd7a7c6e-162b-44f2-96ac-6f09e07fd37a\") " pod="openshift-must-gather-rcvqm/crc-debug-6lkbp" Feb 24 10:48:30 crc kubenswrapper[4755]: I0224 10:48:30.803568 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzp44\" (UniqueName: \"kubernetes.io/projected/fd7a7c6e-162b-44f2-96ac-6f09e07fd37a-kube-api-access-pzp44\") pod \"crc-debug-6lkbp\" (UID: \"fd7a7c6e-162b-44f2-96ac-6f09e07fd37a\") " pod="openshift-must-gather-rcvqm/crc-debug-6lkbp" Feb 24 10:48:30 crc kubenswrapper[4755]: I0224 10:48:30.833186 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzp44\" (UniqueName: \"kubernetes.io/projected/fd7a7c6e-162b-44f2-96ac-6f09e07fd37a-kube-api-access-pzp44\") pod \"crc-debug-6lkbp\" (UID: \"fd7a7c6e-162b-44f2-96ac-6f09e07fd37a\") " pod="openshift-must-gather-rcvqm/crc-debug-6lkbp" Feb 24 10:48:30 crc kubenswrapper[4755]: I0224 10:48:30.918744 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rcvqm/crc-debug-6lkbp" Feb 24 10:48:30 crc kubenswrapper[4755]: W0224 10:48:30.965117 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd7a7c6e_162b_44f2_96ac_6f09e07fd37a.slice/crio-f261adf99a0baef9a9b36d37edf19d79aab12d739a82e02d3431e5b3d4065161 WatchSource:0}: Error finding container f261adf99a0baef9a9b36d37edf19d79aab12d739a82e02d3431e5b3d4065161: Status 404 returned error can't find the container with id f261adf99a0baef9a9b36d37edf19d79aab12d739a82e02d3431e5b3d4065161 Feb 24 10:48:30 crc kubenswrapper[4755]: I0224 10:48:30.967669 4755 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 10:48:31 crc kubenswrapper[4755]: I0224 10:48:31.215994 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rcvqm/crc-debug-6lkbp" event={"ID":"fd7a7c6e-162b-44f2-96ac-6f09e07fd37a","Type":"ContainerStarted","Data":"f261adf99a0baef9a9b36d37edf19d79aab12d739a82e02d3431e5b3d4065161"} Feb 24 10:48:32 crc kubenswrapper[4755]: I0224 10:48:32.632455 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50900: no serving certificate available for the kubelet" Feb 24 10:48:33 crc kubenswrapper[4755]: I0224 10:48:33.358585 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50914: no serving certificate available for the kubelet" Feb 24 10:48:35 crc kubenswrapper[4755]: I0224 10:48:35.315895 4755 scope.go:117] "RemoveContainer" containerID="1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" Feb 24 10:48:35 crc kubenswrapper[4755]: E0224 10:48:35.316485 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:48:35 crc kubenswrapper[4755]: I0224 10:48:35.667608 4755 ???:1] "http: TLS handshake error from 192.168.126.11:32866: no serving certificate available for the kubelet" Feb 24 10:48:36 crc kubenswrapper[4755]: I0224 10:48:36.391113 4755 ???:1] "http: TLS handshake error from 192.168.126.11:32882: no serving certificate available for the kubelet" Feb 24 10:48:38 crc kubenswrapper[4755]: I0224 10:48:38.704407 4755 ???:1] "http: TLS handshake error from 192.168.126.11:32884: no serving certificate available for the kubelet" Feb 24 10:48:39 crc kubenswrapper[4755]: I0224 10:48:39.425992 4755 ???:1] "http: TLS handshake error from 192.168.126.11:32896: no serving certificate available for the kubelet" Feb 24 10:48:41 crc kubenswrapper[4755]: I0224 10:48:41.736550 4755 ???:1] "http: TLS handshake error from 192.168.126.11:32910: no serving certificate available for the kubelet" Feb 24 10:48:42 crc kubenswrapper[4755]: I0224 10:48:42.304493 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rcvqm/crc-debug-6lkbp" event={"ID":"fd7a7c6e-162b-44f2-96ac-6f09e07fd37a","Type":"ContainerStarted","Data":"048ef061b7a1b7567b1e327ac9d3a5dca130511df6606204c035980f6f52795e"} Feb 24 10:48:42 crc kubenswrapper[4755]: I0224 10:48:42.318915 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rcvqm/crc-debug-6lkbp" podStartSLOduration=1.5543297900000002 podStartE2EDuration="12.318889219s" podCreationTimestamp="2026-02-24 10:48:30 +0000 UTC" firstStartedPulling="2026-02-24 10:48:30.96728955 +0000 UTC m=+3215.423812103" lastFinishedPulling="2026-02-24 10:48:41.731848989 +0000 UTC m=+3226.188371532" observedRunningTime="2026-02-24 10:48:42.316738343 +0000 UTC m=+3226.773260886" 
watchObservedRunningTime="2026-02-24 10:48:42.318889219 +0000 UTC m=+3226.775411812" Feb 24 10:48:42 crc kubenswrapper[4755]: I0224 10:48:42.325560 4755 ???:1] "http: TLS handshake error from 192.168.126.11:32918: no serving certificate available for the kubelet" Feb 24 10:48:42 crc kubenswrapper[4755]: I0224 10:48:42.341151 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rcvqm/crc-debug-6lkbp"] Feb 24 10:48:42 crc kubenswrapper[4755]: I0224 10:48:42.346929 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rcvqm/crc-debug-6lkbp"] Feb 24 10:48:42 crc kubenswrapper[4755]: I0224 10:48:42.473496 4755 ???:1] "http: TLS handshake error from 192.168.126.11:32932: no serving certificate available for the kubelet" Feb 24 10:48:43 crc kubenswrapper[4755]: I0224 10:48:43.352526 4755 ???:1] "http: TLS handshake error from 192.168.126.11:32942: no serving certificate available for the kubelet" Feb 24 10:48:44 crc kubenswrapper[4755]: I0224 10:48:44.319379 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-rcvqm/crc-debug-6lkbp" podUID="fd7a7c6e-162b-44f2-96ac-6f09e07fd37a" containerName="container-00" containerID="cri-o://048ef061b7a1b7567b1e327ac9d3a5dca130511df6606204c035980f6f52795e" gracePeriod=2 Feb 24 10:48:44 crc kubenswrapper[4755]: I0224 10:48:44.516265 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rcvqm/crc-debug-6lkbp" Feb 24 10:48:44 crc kubenswrapper[4755]: I0224 10:48:44.634373 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzp44\" (UniqueName: \"kubernetes.io/projected/fd7a7c6e-162b-44f2-96ac-6f09e07fd37a-kube-api-access-pzp44\") pod \"fd7a7c6e-162b-44f2-96ac-6f09e07fd37a\" (UID: \"fd7a7c6e-162b-44f2-96ac-6f09e07fd37a\") " Feb 24 10:48:44 crc kubenswrapper[4755]: I0224 10:48:44.634631 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fd7a7c6e-162b-44f2-96ac-6f09e07fd37a-host\") pod \"fd7a7c6e-162b-44f2-96ac-6f09e07fd37a\" (UID: \"fd7a7c6e-162b-44f2-96ac-6f09e07fd37a\") " Feb 24 10:48:44 crc kubenswrapper[4755]: I0224 10:48:44.634779 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd7a7c6e-162b-44f2-96ac-6f09e07fd37a-host" (OuterVolumeSpecName: "host") pod "fd7a7c6e-162b-44f2-96ac-6f09e07fd37a" (UID: "fd7a7c6e-162b-44f2-96ac-6f09e07fd37a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 10:48:44 crc kubenswrapper[4755]: I0224 10:48:44.635367 4755 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fd7a7c6e-162b-44f2-96ac-6f09e07fd37a-host\") on node \"crc\" DevicePath \"\"" Feb 24 10:48:44 crc kubenswrapper[4755]: I0224 10:48:44.646388 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd7a7c6e-162b-44f2-96ac-6f09e07fd37a-kube-api-access-pzp44" (OuterVolumeSpecName: "kube-api-access-pzp44") pod "fd7a7c6e-162b-44f2-96ac-6f09e07fd37a" (UID: "fd7a7c6e-162b-44f2-96ac-6f09e07fd37a"). InnerVolumeSpecName "kube-api-access-pzp44". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:48:44 crc kubenswrapper[4755]: I0224 10:48:44.737185 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzp44\" (UniqueName: \"kubernetes.io/projected/fd7a7c6e-162b-44f2-96ac-6f09e07fd37a-kube-api-access-pzp44\") on node \"crc\" DevicePath \"\"" Feb 24 10:48:44 crc kubenswrapper[4755]: I0224 10:48:44.780776 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37310: no serving certificate available for the kubelet" Feb 24 10:48:45 crc kubenswrapper[4755]: I0224 10:48:45.335518 4755 generic.go:334] "Generic (PLEG): container finished" podID="fd7a7c6e-162b-44f2-96ac-6f09e07fd37a" containerID="048ef061b7a1b7567b1e327ac9d3a5dca130511df6606204c035980f6f52795e" exitCode=143 Feb 24 10:48:45 crc kubenswrapper[4755]: I0224 10:48:45.335615 4755 scope.go:117] "RemoveContainer" containerID="048ef061b7a1b7567b1e327ac9d3a5dca130511df6606204c035980f6f52795e" Feb 24 10:48:45 crc kubenswrapper[4755]: I0224 10:48:45.335851 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rcvqm/crc-debug-6lkbp" Feb 24 10:48:45 crc kubenswrapper[4755]: I0224 10:48:45.368650 4755 scope.go:117] "RemoveContainer" containerID="048ef061b7a1b7567b1e327ac9d3a5dca130511df6606204c035980f6f52795e" Feb 24 10:48:45 crc kubenswrapper[4755]: E0224 10:48:45.369945 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"048ef061b7a1b7567b1e327ac9d3a5dca130511df6606204c035980f6f52795e\": container with ID starting with 048ef061b7a1b7567b1e327ac9d3a5dca130511df6606204c035980f6f52795e not found: ID does not exist" containerID="048ef061b7a1b7567b1e327ac9d3a5dca130511df6606204c035980f6f52795e" Feb 24 10:48:45 crc kubenswrapper[4755]: I0224 10:48:45.369976 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"048ef061b7a1b7567b1e327ac9d3a5dca130511df6606204c035980f6f52795e"} err="failed to get container status \"048ef061b7a1b7567b1e327ac9d3a5dca130511df6606204c035980f6f52795e\": rpc error: code = NotFound desc = could not find container \"048ef061b7a1b7567b1e327ac9d3a5dca130511df6606204c035980f6f52795e\": container with ID starting with 048ef061b7a1b7567b1e327ac9d3a5dca130511df6606204c035980f6f52795e not found: ID does not exist" Feb 24 10:48:45 crc kubenswrapper[4755]: I0224 10:48:45.521115 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37314: no serving certificate available for the kubelet" Feb 24 10:48:46 crc kubenswrapper[4755]: I0224 10:48:46.320913 4755 scope.go:117] "RemoveContainer" containerID="1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" Feb 24 10:48:46 crc kubenswrapper[4755]: E0224 10:48:46.321222 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:48:46 crc kubenswrapper[4755]: I0224 10:48:46.331081 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd7a7c6e-162b-44f2-96ac-6f09e07fd37a" path="/var/lib/kubelet/pods/fd7a7c6e-162b-44f2-96ac-6f09e07fd37a/volumes" Feb 24 10:48:47 crc kubenswrapper[4755]: I0224 10:48:47.837586 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37330: no serving certificate available for the kubelet" Feb 24 10:48:48 crc kubenswrapper[4755]: I0224 10:48:48.557074 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37338: no serving certificate available for the kubelet" Feb 24 10:48:50 crc kubenswrapper[4755]: I0224 10:48:50.875783 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37352: no serving certificate available for the kubelet" Feb 24 10:48:51 crc kubenswrapper[4755]: I0224 10:48:51.596961 4755 ???:1] "http: TLS handshake error from 192.168.126.11:37368: no serving certificate available for the kubelet" Feb 24 10:48:53 crc kubenswrapper[4755]: I0224 10:48:53.908694 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48818: no serving certificate available for the kubelet" Feb 24 10:48:54 crc kubenswrapper[4755]: I0224 10:48:54.640156 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48826: no serving certificate available for the kubelet" Feb 24 10:48:56 crc kubenswrapper[4755]: I0224 10:48:56.956699 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48832: no serving certificate available for the kubelet" Feb 24 10:48:57 crc kubenswrapper[4755]: I0224 10:48:57.674425 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48836: no serving certificate available for the kubelet" Feb 24 10:48:59 crc kubenswrapper[4755]: I0224 10:48:59.316306 4755 scope.go:117] 
"RemoveContainer" containerID="1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" Feb 24 10:48:59 crc kubenswrapper[4755]: E0224 10:48:59.317023 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:49:00 crc kubenswrapper[4755]: I0224 10:49:00.014358 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48848: no serving certificate available for the kubelet" Feb 24 10:49:00 crc kubenswrapper[4755]: I0224 10:49:00.791651 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48862: no serving certificate available for the kubelet" Feb 24 10:49:03 crc kubenswrapper[4755]: I0224 10:49:03.067004 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48878: no serving certificate available for the kubelet" Feb 24 10:49:03 crc kubenswrapper[4755]: I0224 10:49:03.849239 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54096: no serving certificate available for the kubelet" Feb 24 10:49:06 crc kubenswrapper[4755]: I0224 10:49:06.113808 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54110: no serving certificate available for the kubelet" Feb 24 10:49:06 crc kubenswrapper[4755]: I0224 10:49:06.925671 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54116: no serving certificate available for the kubelet" Feb 24 10:49:09 crc kubenswrapper[4755]: I0224 10:49:09.172276 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54120: no serving certificate available for the kubelet" Feb 24 10:49:09 crc kubenswrapper[4755]: I0224 10:49:09.990162 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54126: no serving certificate 
available for the kubelet" Feb 24 10:49:12 crc kubenswrapper[4755]: I0224 10:49:12.233787 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54136: no serving certificate available for the kubelet" Feb 24 10:49:13 crc kubenswrapper[4755]: I0224 10:49:13.024200 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54144: no serving certificate available for the kubelet" Feb 24 10:49:14 crc kubenswrapper[4755]: I0224 10:49:14.317089 4755 scope.go:117] "RemoveContainer" containerID="1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" Feb 24 10:49:14 crc kubenswrapper[4755]: E0224 10:49:14.317526 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:49:15 crc kubenswrapper[4755]: I0224 10:49:15.277093 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44356: no serving certificate available for the kubelet" Feb 24 10:49:16 crc kubenswrapper[4755]: I0224 10:49:16.066810 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44368: no serving certificate available for the kubelet" Feb 24 10:49:18 crc kubenswrapper[4755]: I0224 10:49:18.323381 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44382: no serving certificate available for the kubelet" Feb 24 10:49:19 crc kubenswrapper[4755]: I0224 10:49:19.103769 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44390: no serving certificate available for the kubelet" Feb 24 10:49:21 crc kubenswrapper[4755]: I0224 10:49:21.379863 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44406: no serving certificate available for the kubelet" Feb 24 10:49:22 crc kubenswrapper[4755]: I0224 
10:49:22.137383 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44418: no serving certificate available for the kubelet" Feb 24 10:49:24 crc kubenswrapper[4755]: I0224 10:49:24.417579 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45652: no serving certificate available for the kubelet" Feb 24 10:49:25 crc kubenswrapper[4755]: I0224 10:49:25.178231 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45656: no serving certificate available for the kubelet" Feb 24 10:49:27 crc kubenswrapper[4755]: I0224 10:49:27.093008 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45658: no serving certificate available for the kubelet" Feb 24 10:49:27 crc kubenswrapper[4755]: I0224 10:49:27.266101 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45668: no serving certificate available for the kubelet" Feb 24 10:49:27 crc kubenswrapper[4755]: I0224 10:49:27.268201 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45670: no serving certificate available for the kubelet" Feb 24 10:49:27 crc kubenswrapper[4755]: I0224 10:49:27.393421 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45684: no serving certificate available for the kubelet" Feb 24 10:49:27 crc kubenswrapper[4755]: I0224 10:49:27.458140 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45700: no serving certificate available for the kubelet" Feb 24 10:49:27 crc kubenswrapper[4755]: I0224 10:49:27.483313 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45714: no serving certificate available for the kubelet" Feb 24 10:49:27 crc kubenswrapper[4755]: I0224 10:49:27.626042 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45728: no serving certificate available for the kubelet" Feb 24 10:49:27 crc kubenswrapper[4755]: I0224 10:49:27.791928 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45736: no serving certificate available for the kubelet" Feb 24 10:49:27 crc kubenswrapper[4755]: I0224 10:49:27.815411 4755 ???:1] 
"http: TLS handshake error from 192.168.126.11:45744: no serving certificate available for the kubelet" Feb 24 10:49:27 crc kubenswrapper[4755]: I0224 10:49:27.892929 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45754: no serving certificate available for the kubelet" Feb 24 10:49:28 crc kubenswrapper[4755]: I0224 10:49:28.009809 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45756: no serving certificate available for the kubelet" Feb 24 10:49:28 crc kubenswrapper[4755]: I0224 10:49:28.164883 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45764: no serving certificate available for the kubelet" Feb 24 10:49:28 crc kubenswrapper[4755]: I0224 10:49:28.167809 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45768: no serving certificate available for the kubelet" Feb 24 10:49:28 crc kubenswrapper[4755]: I0224 10:49:28.197781 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45772: no serving certificate available for the kubelet" Feb 24 10:49:28 crc kubenswrapper[4755]: I0224 10:49:28.214681 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45776: no serving certificate available for the kubelet" Feb 24 10:49:28 crc kubenswrapper[4755]: I0224 10:49:28.357626 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45786: no serving certificate available for the kubelet" Feb 24 10:49:28 crc kubenswrapper[4755]: I0224 10:49:28.391537 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45788: no serving certificate available for the kubelet" Feb 24 10:49:28 crc kubenswrapper[4755]: I0224 10:49:28.535709 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45794: no serving certificate available for the kubelet" Feb 24 10:49:28 crc kubenswrapper[4755]: I0224 10:49:28.714057 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45798: no serving certificate available for the kubelet" Feb 24 10:49:28 crc kubenswrapper[4755]: I0224 10:49:28.738696 4755 ???:1] "http: TLS handshake error 
from 192.168.126.11:45806: no serving certificate available for the kubelet" Feb 24 10:49:28 crc kubenswrapper[4755]: I0224 10:49:28.775850 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45814: no serving certificate available for the kubelet" Feb 24 10:49:28 crc kubenswrapper[4755]: I0224 10:49:28.905127 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45824: no serving certificate available for the kubelet" Feb 24 10:49:28 crc kubenswrapper[4755]: I0224 10:49:28.948272 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45828: no serving certificate available for the kubelet" Feb 24 10:49:28 crc kubenswrapper[4755]: I0224 10:49:28.954369 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45840: no serving certificate available for the kubelet" Feb 24 10:49:29 crc kubenswrapper[4755]: I0224 10:49:29.083182 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45850: no serving certificate available for the kubelet" Feb 24 10:49:29 crc kubenswrapper[4755]: I0224 10:49:29.106039 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45860: no serving certificate available for the kubelet" Feb 24 10:49:29 crc kubenswrapper[4755]: I0224 10:49:29.186163 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45864: no serving certificate available for the kubelet" Feb 24 10:49:29 crc kubenswrapper[4755]: I0224 10:49:29.304922 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45870: no serving certificate available for the kubelet" Feb 24 10:49:29 crc kubenswrapper[4755]: I0224 10:49:29.317059 4755 scope.go:117] "RemoveContainer" containerID="1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" Feb 24 10:49:29 crc kubenswrapper[4755]: E0224 10:49:29.317346 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:49:29 crc kubenswrapper[4755]: I0224 10:49:29.472893 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45886: no serving certificate available for the kubelet" Feb 24 10:49:29 crc kubenswrapper[4755]: I0224 10:49:29.493433 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45902: no serving certificate available for the kubelet" Feb 24 10:49:29 crc kubenswrapper[4755]: I0224 10:49:29.497638 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45904: no serving certificate available for the kubelet" Feb 24 10:49:29 crc kubenswrapper[4755]: I0224 10:49:29.722363 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45912: no serving certificate available for the kubelet" Feb 24 10:49:29 crc kubenswrapper[4755]: I0224 10:49:29.724455 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45920: no serving certificate available for the kubelet" Feb 24 10:49:29 crc kubenswrapper[4755]: I0224 10:49:29.735507 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45922: no serving certificate available for the kubelet" Feb 24 10:49:29 crc kubenswrapper[4755]: I0224 10:49:29.948373 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45936: no serving certificate available for the kubelet" Feb 24 10:49:29 crc kubenswrapper[4755]: I0224 10:49:29.948558 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45952: no serving certificate available for the kubelet" Feb 24 10:49:29 crc kubenswrapper[4755]: I0224 10:49:29.981402 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45962: no serving certificate available for the kubelet" Feb 24 10:49:30 crc kubenswrapper[4755]: I0224 10:49:30.102455 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45972: no serving certificate available for 
the kubelet" Feb 24 10:49:30 crc kubenswrapper[4755]: I0224 10:49:30.330548 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45974: no serving certificate available for the kubelet" Feb 24 10:49:30 crc kubenswrapper[4755]: I0224 10:49:30.341831 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45986: no serving certificate available for the kubelet" Feb 24 10:49:30 crc kubenswrapper[4755]: I0224 10:49:30.356955 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45990: no serving certificate available for the kubelet" Feb 24 10:49:30 crc kubenswrapper[4755]: I0224 10:49:30.402336 4755 ???:1] "http: TLS handshake error from 192.168.126.11:45994: no serving certificate available for the kubelet" Feb 24 10:49:30 crc kubenswrapper[4755]: I0224 10:49:30.503857 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46010: no serving certificate available for the kubelet" Feb 24 10:49:30 crc kubenswrapper[4755]: I0224 10:49:30.515235 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46024: no serving certificate available for the kubelet" Feb 24 10:49:30 crc kubenswrapper[4755]: I0224 10:49:30.539824 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46026: no serving certificate available for the kubelet" Feb 24 10:49:30 crc kubenswrapper[4755]: I0224 10:49:30.547145 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46040: no serving certificate available for the kubelet" Feb 24 10:49:30 crc kubenswrapper[4755]: I0224 10:49:30.631439 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46050: no serving certificate available for the kubelet" Feb 24 10:49:30 crc kubenswrapper[4755]: I0224 10:49:30.675014 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46066: no serving certificate available for the kubelet" Feb 24 10:49:30 crc kubenswrapper[4755]: I0224 10:49:30.749675 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46080: no serving certificate available for the kubelet" Feb 24 10:49:30 
crc kubenswrapper[4755]: I0224 10:49:30.799600 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46082: no serving certificate available for the kubelet" Feb 24 10:49:31 crc kubenswrapper[4755]: I0224 10:49:31.255653 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46096: no serving certificate available for the kubelet" Feb 24 10:49:33 crc kubenswrapper[4755]: I0224 10:49:33.552515 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46106: no serving certificate available for the kubelet" Feb 24 10:49:34 crc kubenswrapper[4755]: I0224 10:49:34.298638 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53420: no serving certificate available for the kubelet" Feb 24 10:49:36 crc kubenswrapper[4755]: I0224 10:49:36.589985 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53426: no serving certificate available for the kubelet" Feb 24 10:49:37 crc kubenswrapper[4755]: I0224 10:49:37.352588 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53430: no serving certificate available for the kubelet" Feb 24 10:49:39 crc kubenswrapper[4755]: I0224 10:49:39.621732 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53446: no serving certificate available for the kubelet" Feb 24 10:49:40 crc kubenswrapper[4755]: I0224 10:49:40.387432 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53454: no serving certificate available for the kubelet" Feb 24 10:49:42 crc kubenswrapper[4755]: I0224 10:49:42.316169 4755 scope.go:117] "RemoveContainer" containerID="1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" Feb 24 10:49:42 crc kubenswrapper[4755]: E0224 10:49:42.316747 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:49:42 crc kubenswrapper[4755]: I0224 10:49:42.656268 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53468: no serving certificate available for the kubelet" Feb 24 10:49:43 crc kubenswrapper[4755]: I0224 10:49:43.456156 4755 ???:1] "http: TLS handshake error from 192.168.126.11:53480: no serving certificate available for the kubelet" Feb 24 10:49:45 crc kubenswrapper[4755]: I0224 10:49:45.714027 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46476: no serving certificate available for the kubelet" Feb 24 10:49:46 crc kubenswrapper[4755]: I0224 10:49:46.502511 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46480: no serving certificate available for the kubelet" Feb 24 10:49:47 crc kubenswrapper[4755]: I0224 10:49:47.666422 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46482: no serving certificate available for the kubelet" Feb 24 10:49:47 crc kubenswrapper[4755]: I0224 10:49:47.900668 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46492: no serving certificate available for the kubelet" Feb 24 10:49:47 crc kubenswrapper[4755]: I0224 10:49:47.911514 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46508: no serving certificate available for the kubelet" Feb 24 10:49:47 crc kubenswrapper[4755]: I0224 10:49:47.920119 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46510: no serving certificate available for the kubelet" Feb 24 10:49:48 crc kubenswrapper[4755]: I0224 10:49:48.116233 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46512: no serving certificate available for the kubelet" Feb 24 10:49:48 crc kubenswrapper[4755]: I0224 10:49:48.123552 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46514: no serving certificate available for the kubelet" Feb 24 10:49:48 crc kubenswrapper[4755]: I0224 10:49:48.135415 4755 ???:1] "http: TLS handshake 
error from 192.168.126.11:46518: no serving certificate available for the kubelet" Feb 24 10:49:48 crc kubenswrapper[4755]: I0224 10:49:48.273552 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46530: no serving certificate available for the kubelet" Feb 24 10:49:48 crc kubenswrapper[4755]: I0224 10:49:48.310935 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46538: no serving certificate available for the kubelet" Feb 24 10:49:48 crc kubenswrapper[4755]: I0224 10:49:48.455888 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46544: no serving certificate available for the kubelet" Feb 24 10:49:48 crc kubenswrapper[4755]: I0224 10:49:48.528697 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46558: no serving certificate available for the kubelet" Feb 24 10:49:48 crc kubenswrapper[4755]: I0224 10:49:48.646387 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46560: no serving certificate available for the kubelet" Feb 24 10:49:48 crc kubenswrapper[4755]: I0224 10:49:48.756922 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46568: no serving certificate available for the kubelet" Feb 24 10:49:48 crc kubenswrapper[4755]: I0224 10:49:48.757109 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46580: no serving certificate available for the kubelet" Feb 24 10:49:48 crc kubenswrapper[4755]: I0224 10:49:48.855051 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46582: no serving certificate available for the kubelet" Feb 24 10:49:48 crc kubenswrapper[4755]: I0224 10:49:48.974232 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46596: no serving certificate available for the kubelet" Feb 24 10:49:49 crc kubenswrapper[4755]: I0224 10:49:49.038475 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46598: no serving certificate available for the kubelet" Feb 24 10:49:49 crc kubenswrapper[4755]: I0224 10:49:49.178022 4755 ???:1] "http: TLS handshake error from 
192.168.126.11:46612: no serving certificate available for the kubelet" Feb 24 10:49:49 crc kubenswrapper[4755]: I0224 10:49:49.304879 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46614: no serving certificate available for the kubelet" Feb 24 10:49:49 crc kubenswrapper[4755]: I0224 10:49:49.362456 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46630: no serving certificate available for the kubelet" Feb 24 10:49:49 crc kubenswrapper[4755]: I0224 10:49:49.459556 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46644: no serving certificate available for the kubelet" Feb 24 10:49:49 crc kubenswrapper[4755]: I0224 10:49:49.540129 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46648: no serving certificate available for the kubelet" Feb 24 10:49:49 crc kubenswrapper[4755]: I0224 10:49:49.567565 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46654: no serving certificate available for the kubelet" Feb 24 10:49:49 crc kubenswrapper[4755]: I0224 10:49:49.704039 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46660: no serving certificate available for the kubelet" Feb 24 10:49:49 crc kubenswrapper[4755]: I0224 10:49:49.854217 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46668: no serving certificate available for the kubelet" Feb 24 10:49:49 crc kubenswrapper[4755]: I0224 10:49:49.942807 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46680: no serving certificate available for the kubelet" Feb 24 10:49:50 crc kubenswrapper[4755]: I0224 10:49:50.080271 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46688: no serving certificate available for the kubelet" Feb 24 10:49:50 crc kubenswrapper[4755]: I0224 10:49:50.177581 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46692: no serving certificate available for the kubelet" Feb 24 10:49:50 crc kubenswrapper[4755]: I0224 10:49:50.253020 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46704: no 
serving certificate available for the kubelet" Feb 24 10:49:50 crc kubenswrapper[4755]: I0224 10:49:50.367266 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46708: no serving certificate available for the kubelet" Feb 24 10:49:50 crc kubenswrapper[4755]: I0224 10:49:50.459608 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46712: no serving certificate available for the kubelet" Feb 24 10:49:50 crc kubenswrapper[4755]: I0224 10:49:50.566339 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46728: no serving certificate available for the kubelet" Feb 24 10:49:50 crc kubenswrapper[4755]: I0224 10:49:50.622724 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46740: no serving certificate available for the kubelet" Feb 24 10:49:50 crc kubenswrapper[4755]: I0224 10:49:50.786858 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46754: no serving certificate available for the kubelet" Feb 24 10:49:51 crc kubenswrapper[4755]: I0224 10:49:51.801869 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46756: no serving certificate available for the kubelet" Feb 24 10:49:52 crc kubenswrapper[4755]: I0224 10:49:52.580485 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46764: no serving certificate available for the kubelet" Feb 24 10:49:54 crc kubenswrapper[4755]: I0224 10:49:54.837509 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52920: no serving certificate available for the kubelet" Feb 24 10:49:55 crc kubenswrapper[4755]: I0224 10:49:55.616303 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52936: no serving certificate available for the kubelet" Feb 24 10:49:57 crc kubenswrapper[4755]: I0224 10:49:57.316832 4755 scope.go:117] "RemoveContainer" containerID="1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" Feb 24 10:49:57 crc kubenswrapper[4755]: E0224 10:49:57.317676 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:49:57 crc kubenswrapper[4755]: I0224 10:49:57.874841 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52950: no serving certificate available for the kubelet" Feb 24 10:49:58 crc kubenswrapper[4755]: I0224 10:49:58.625936 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52966: no serving certificate available for the kubelet" Feb 24 10:49:58 crc kubenswrapper[4755]: I0224 10:49:58.654188 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52976: no serving certificate available for the kubelet" Feb 24 10:50:00 crc kubenswrapper[4755]: I0224 10:50:00.923479 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52986: no serving certificate available for the kubelet" Feb 24 10:50:01 crc kubenswrapper[4755]: I0224 10:50:01.689905 4755 ???:1] "http: TLS handshake error from 192.168.126.11:52992: no serving certificate available for the kubelet" Feb 24 10:50:03 crc kubenswrapper[4755]: I0224 10:50:03.963037 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60054: no serving certificate available for the kubelet" Feb 24 10:50:04 crc kubenswrapper[4755]: I0224 10:50:04.743157 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60058: no serving certificate available for the kubelet" Feb 24 10:50:07 crc kubenswrapper[4755]: I0224 10:50:07.011163 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60074: no serving certificate available for the kubelet" Feb 24 10:50:07 crc kubenswrapper[4755]: I0224 10:50:07.789709 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60078: no serving certificate available for the kubelet" Feb 24 10:50:10 crc kubenswrapper[4755]: I0224 
10:50:10.053816 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60088: no serving certificate available for the kubelet" Feb 24 10:50:10 crc kubenswrapper[4755]: I0224 10:50:10.833208 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60104: no serving certificate available for the kubelet" Feb 24 10:50:11 crc kubenswrapper[4755]: I0224 10:50:11.163163 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60112: no serving certificate available for the kubelet" Feb 24 10:50:11 crc kubenswrapper[4755]: I0224 10:50:11.338502 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60128: no serving certificate available for the kubelet" Feb 24 10:50:11 crc kubenswrapper[4755]: I0224 10:50:11.354671 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60136: no serving certificate available for the kubelet" Feb 24 10:50:12 crc kubenswrapper[4755]: I0224 10:50:12.316635 4755 scope.go:117] "RemoveContainer" containerID="1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" Feb 24 10:50:12 crc kubenswrapper[4755]: E0224 10:50:12.316907 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:50:13 crc kubenswrapper[4755]: I0224 10:50:13.105699 4755 ???:1] "http: TLS handshake error from 192.168.126.11:60148: no serving certificate available for the kubelet" Feb 24 10:50:13 crc kubenswrapper[4755]: I0224 10:50:13.871153 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58508: no serving certificate available for the kubelet" Feb 24 10:50:16 crc kubenswrapper[4755]: I0224 10:50:16.146626 4755 ???:1] "http: TLS handshake error from 
192.168.126.11:58510: no serving certificate available for the kubelet" Feb 24 10:50:16 crc kubenswrapper[4755]: I0224 10:50:16.922535 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58526: no serving certificate available for the kubelet" Feb 24 10:50:19 crc kubenswrapper[4755]: I0224 10:50:19.192122 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58534: no serving certificate available for the kubelet" Feb 24 10:50:19 crc kubenswrapper[4755]: I0224 10:50:19.979406 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58550: no serving certificate available for the kubelet" Feb 24 10:50:22 crc kubenswrapper[4755]: I0224 10:50:22.243614 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58560: no serving certificate available for the kubelet" Feb 24 10:50:23 crc kubenswrapper[4755]: I0224 10:50:23.019923 4755 ???:1] "http: TLS handshake error from 192.168.126.11:58568: no serving certificate available for the kubelet" Feb 24 10:50:25 crc kubenswrapper[4755]: I0224 10:50:25.158099 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46688: no serving certificate available for the kubelet" Feb 24 10:50:25 crc kubenswrapper[4755]: I0224 10:50:25.275822 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46694: no serving certificate available for the kubelet" Feb 24 10:50:25 crc kubenswrapper[4755]: I0224 10:50:25.374115 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46710: no serving certificate available for the kubelet" Feb 24 10:50:25 crc kubenswrapper[4755]: I0224 10:50:25.418252 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46714: no serving certificate available for the kubelet" Feb 24 10:50:26 crc kubenswrapper[4755]: I0224 10:50:26.055438 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46724: no serving certificate available for the kubelet" Feb 24 10:50:27 crc kubenswrapper[4755]: I0224 10:50:27.315948 4755 scope.go:117] "RemoveContainer" 
containerID="1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" Feb 24 10:50:27 crc kubenswrapper[4755]: E0224 10:50:27.316605 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:50:28 crc kubenswrapper[4755]: I0224 10:50:28.328970 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46736: no serving certificate available for the kubelet" Feb 24 10:50:29 crc kubenswrapper[4755]: I0224 10:50:29.132679 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46752: no serving certificate available for the kubelet" Feb 24 10:50:31 crc kubenswrapper[4755]: I0224 10:50:31.426322 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46768: no serving certificate available for the kubelet" Feb 24 10:50:32 crc kubenswrapper[4755]: I0224 10:50:32.197350 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46772: no serving certificate available for the kubelet" Feb 24 10:50:34 crc kubenswrapper[4755]: I0224 10:50:34.466036 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39392: no serving certificate available for the kubelet" Feb 24 10:50:35 crc kubenswrapper[4755]: I0224 10:50:35.246478 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39408: no serving certificate available for the kubelet" Feb 24 10:50:37 crc kubenswrapper[4755]: I0224 10:50:37.538180 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39416: no serving certificate available for the kubelet" Feb 24 10:50:38 crc kubenswrapper[4755]: I0224 10:50:38.285024 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39424: no serving certificate available for the 
kubelet" Feb 24 10:50:39 crc kubenswrapper[4755]: I0224 10:50:39.131941 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39440: no serving certificate available for the kubelet" Feb 24 10:50:39 crc kubenswrapper[4755]: I0224 10:50:39.243992 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39452: no serving certificate available for the kubelet" Feb 24 10:50:39 crc kubenswrapper[4755]: I0224 10:50:39.277695 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39466: no serving certificate available for the kubelet" Feb 24 10:50:39 crc kubenswrapper[4755]: I0224 10:50:39.344324 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39476: no serving certificate available for the kubelet" Feb 24 10:50:39 crc kubenswrapper[4755]: I0224 10:50:39.429498 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39484: no serving certificate available for the kubelet" Feb 24 10:50:39 crc kubenswrapper[4755]: I0224 10:50:39.507374 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39490: no serving certificate available for the kubelet" Feb 24 10:50:40 crc kubenswrapper[4755]: I0224 10:50:40.575680 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39498: no serving certificate available for the kubelet" Feb 24 10:50:41 crc kubenswrapper[4755]: I0224 10:50:41.330250 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39512: no serving certificate available for the kubelet" Feb 24 10:50:42 crc kubenswrapper[4755]: I0224 10:50:42.317223 4755 scope.go:117] "RemoveContainer" containerID="1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" Feb 24 10:50:42 crc kubenswrapper[4755]: E0224 10:50:42.317573 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:50:43 crc kubenswrapper[4755]: I0224 10:50:43.621876 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39526: no serving certificate available for the kubelet" Feb 24 10:50:44 crc kubenswrapper[4755]: I0224 10:50:44.397175 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36374: no serving certificate available for the kubelet" Feb 24 10:50:46 crc kubenswrapper[4755]: I0224 10:50:46.667567 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36390: no serving certificate available for the kubelet" Feb 24 10:50:47 crc kubenswrapper[4755]: I0224 10:50:47.444780 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36406: no serving certificate available for the kubelet" Feb 24 10:50:49 crc kubenswrapper[4755]: I0224 10:50:49.719704 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36414: no serving certificate available for the kubelet" Feb 24 10:50:50 crc kubenswrapper[4755]: I0224 10:50:50.488698 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36422: no serving certificate available for the kubelet" Feb 24 10:50:51 crc kubenswrapper[4755]: I0224 10:50:51.521465 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36438: no serving certificate available for the kubelet" Feb 24 10:50:52 crc kubenswrapper[4755]: I0224 10:50:52.757045 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36448: no serving certificate available for the kubelet" Feb 24 10:50:53 crc kubenswrapper[4755]: I0224 10:50:53.528894 4755 ???:1] "http: TLS handshake error from 192.168.126.11:36464: no serving certificate available for the kubelet" Feb 24 10:50:55 crc kubenswrapper[4755]: I0224 10:50:55.803919 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39542: no serving certificate available for the kubelet" Feb 24 10:50:56 crc kubenswrapper[4755]: I0224 10:50:56.563006 4755 ???:1] "http: TLS handshake 
error from 192.168.126.11:39550: no serving certificate available for the kubelet" Feb 24 10:50:57 crc kubenswrapper[4755]: I0224 10:50:57.317319 4755 scope.go:117] "RemoveContainer" containerID="1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" Feb 24 10:50:57 crc kubenswrapper[4755]: E0224 10:50:57.317728 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:50:58 crc kubenswrapper[4755]: I0224 10:50:58.842647 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39552: no serving certificate available for the kubelet" Feb 24 10:50:59 crc kubenswrapper[4755]: I0224 10:50:59.609111 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39566: no serving certificate available for the kubelet" Feb 24 10:51:01 crc kubenswrapper[4755]: I0224 10:51:01.901635 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39576: no serving certificate available for the kubelet" Feb 24 10:51:02 crc kubenswrapper[4755]: I0224 10:51:02.675360 4755 ???:1] "http: TLS handshake error from 192.168.126.11:39592: no serving certificate available for the kubelet" Feb 24 10:51:05 crc kubenswrapper[4755]: I0224 10:51:05.006669 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54892: no serving certificate available for the kubelet" Feb 24 10:51:05 crc kubenswrapper[4755]: I0224 10:51:05.728461 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54904: no serving certificate available for the kubelet" Feb 24 10:51:07 crc kubenswrapper[4755]: I0224 10:51:07.604745 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54914: no serving certificate available for the 
kubelet" Feb 24 10:51:07 crc kubenswrapper[4755]: I0224 10:51:07.605173 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54926: no serving certificate available for the kubelet" Feb 24 10:51:07 crc kubenswrapper[4755]: I0224 10:51:07.771744 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54940: no serving certificate available for the kubelet" Feb 24 10:51:07 crc kubenswrapper[4755]: I0224 10:51:07.948115 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54956: no serving certificate available for the kubelet" Feb 24 10:51:07 crc kubenswrapper[4755]: I0224 10:51:07.959041 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54964: no serving certificate available for the kubelet" Feb 24 10:51:07 crc kubenswrapper[4755]: I0224 10:51:07.972253 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54974: no serving certificate available for the kubelet" Feb 24 10:51:08 crc kubenswrapper[4755]: I0224 10:51:08.014479 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54986: no serving certificate available for the kubelet" Feb 24 10:51:08 crc kubenswrapper[4755]: I0224 10:51:08.043735 4755 ???:1] "http: TLS handshake error from 192.168.126.11:54998: no serving certificate available for the kubelet" Feb 24 10:51:08 crc kubenswrapper[4755]: I0224 10:51:08.181423 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55014: no serving certificate available for the kubelet" Feb 24 10:51:08 crc kubenswrapper[4755]: I0224 10:51:08.188024 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55018: no serving certificate available for the kubelet" Feb 24 10:51:08 crc kubenswrapper[4755]: I0224 10:51:08.189866 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55030: no serving certificate available for the kubelet" Feb 24 10:51:08 crc kubenswrapper[4755]: I0224 10:51:08.246293 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55044: no serving certificate available for the kubelet" Feb 24 10:51:08 crc 
kubenswrapper[4755]: I0224 10:51:08.403562 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55060: no serving certificate available for the kubelet" Feb 24 10:51:08 crc kubenswrapper[4755]: I0224 10:51:08.412232 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55074: no serving certificate available for the kubelet" Feb 24 10:51:08 crc kubenswrapper[4755]: I0224 10:51:08.412872 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55082: no serving certificate available for the kubelet" Feb 24 10:51:08 crc kubenswrapper[4755]: I0224 10:51:08.442664 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55088: no serving certificate available for the kubelet" Feb 24 10:51:08 crc kubenswrapper[4755]: I0224 10:51:08.565959 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55100: no serving certificate available for the kubelet" Feb 24 10:51:08 crc kubenswrapper[4755]: I0224 10:51:08.595792 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55104: no serving certificate available for the kubelet" Feb 24 10:51:08 crc kubenswrapper[4755]: I0224 10:51:08.601382 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55114: no serving certificate available for the kubelet" Feb 24 10:51:08 crc kubenswrapper[4755]: I0224 10:51:08.632359 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55126: no serving certificate available for the kubelet" Feb 24 10:51:08 crc kubenswrapper[4755]: I0224 10:51:08.727554 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55142: no serving certificate available for the kubelet" Feb 24 10:51:08 crc kubenswrapper[4755]: I0224 10:51:08.767543 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55152: no serving certificate available for the kubelet" Feb 24 10:51:08 crc kubenswrapper[4755]: I0224 10:51:08.786621 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55166: no serving certificate available for the kubelet" Feb 24 10:51:08 crc kubenswrapper[4755]: I0224 
10:51:08.935433 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55182: no serving certificate available for the kubelet" Feb 24 10:51:08 crc kubenswrapper[4755]: I0224 10:51:08.950018 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55196: no serving certificate available for the kubelet" Feb 24 10:51:09 crc kubenswrapper[4755]: I0224 10:51:09.127273 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55210: no serving certificate available for the kubelet" Feb 24 10:51:09 crc kubenswrapper[4755]: I0224 10:51:09.131359 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55214: no serving certificate available for the kubelet" Feb 24 10:51:11 crc kubenswrapper[4755]: I0224 10:51:11.090054 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55228: no serving certificate available for the kubelet" Feb 24 10:51:11 crc kubenswrapper[4755]: I0224 10:51:11.317188 4755 scope.go:117] "RemoveContainer" containerID="1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" Feb 24 10:51:11 crc kubenswrapper[4755]: E0224 10:51:11.317629 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:51:11 crc kubenswrapper[4755]: I0224 10:51:11.820963 4755 ???:1] "http: TLS handshake error from 192.168.126.11:55234: no serving certificate available for the kubelet" Feb 24 10:51:14 crc kubenswrapper[4755]: I0224 10:51:14.135617 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34684: no serving certificate available for the kubelet" Feb 24 10:51:14 crc kubenswrapper[4755]: I0224 10:51:14.873441 4755 ???:1] "http: TLS handshake error from 
192.168.126.11:34690: no serving certificate available for the kubelet" Feb 24 10:51:17 crc kubenswrapper[4755]: I0224 10:51:17.173295 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34700: no serving certificate available for the kubelet" Feb 24 10:51:17 crc kubenswrapper[4755]: I0224 10:51:17.917766 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34710: no serving certificate available for the kubelet" Feb 24 10:51:19 crc kubenswrapper[4755]: I0224 10:51:19.519906 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" containerName="galera" probeResult="failure" output=< Feb 24 10:51:19 crc kubenswrapper[4755]: waiting for gcomm URI Feb 24 10:51:19 crc kubenswrapper[4755]: > Feb 24 10:51:19 crc kubenswrapper[4755]: I0224 10:51:19.520533 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 10:51:19 crc kubenswrapper[4755]: I0224 10:51:19.521669 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"143341025cc371f0627ece60e54135ae4e4726a34d9c2a806765f32736f60a34"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed startup probe, will be restarted" Feb 24 10:51:19 crc kubenswrapper[4755]: I0224 10:51:19.619554 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" containerName="galera" containerID="cri-o://143341025cc371f0627ece60e54135ae4e4726a34d9c2a806765f32736f60a34" gracePeriod=30 Feb 24 10:51:19 crc kubenswrapper[4755]: E0224 10:51:19.745264 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(fe13802e-a28d-4e11-a315-c0ae66bf0e1c)\"" 
pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" Feb 24 10:51:20 crc kubenswrapper[4755]: I0224 10:51:20.213787 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34714: no serving certificate available for the kubelet" Feb 24 10:51:20 crc kubenswrapper[4755]: I0224 10:51:20.615748 4755 generic.go:334] "Generic (PLEG): container finished" podID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" containerID="143341025cc371f0627ece60e54135ae4e4726a34d9c2a806765f32736f60a34" exitCode=143 Feb 24 10:51:20 crc kubenswrapper[4755]: I0224 10:51:20.615812 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe13802e-a28d-4e11-a315-c0ae66bf0e1c","Type":"ContainerDied","Data":"143341025cc371f0627ece60e54135ae4e4726a34d9c2a806765f32736f60a34"} Feb 24 10:51:20 crc kubenswrapper[4755]: I0224 10:51:20.616281 4755 scope.go:117] "RemoveContainer" containerID="3a66d8dae34f6f20bcc5933cf924596e95ac093c06c4a681a04907cdab739f53" Feb 24 10:51:20 crc kubenswrapper[4755]: I0224 10:51:20.617330 4755 scope.go:117] "RemoveContainer" containerID="143341025cc371f0627ece60e54135ae4e4726a34d9c2a806765f32736f60a34" Feb 24 10:51:20 crc kubenswrapper[4755]: E0224 10:51:20.617716 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(fe13802e-a28d-4e11-a315-c0ae66bf0e1c)\"" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" Feb 24 10:51:20 crc kubenswrapper[4755]: I0224 10:51:20.975231 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34730: no serving certificate available for the kubelet" Feb 24 10:51:22 crc kubenswrapper[4755]: I0224 10:51:22.978703 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39" containerName="galera" 
probeResult="failure" output=< Feb 24 10:51:22 crc kubenswrapper[4755]: waiting for gcomm URI Feb 24 10:51:22 crc kubenswrapper[4755]: > Feb 24 10:51:22 crc kubenswrapper[4755]: I0224 10:51:22.979813 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 10:51:22 crc kubenswrapper[4755]: I0224 10:51:22.980526 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"fb15228cab6b3b0ebbb161d141dbaf89fb26137ce9638882c45dd4e7edffc6ad"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed startup probe, will be restarted" Feb 24 10:51:23 crc kubenswrapper[4755]: I0224 10:51:23.028277 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39" containerName="galera" containerID="cri-o://fb15228cab6b3b0ebbb161d141dbaf89fb26137ce9638882c45dd4e7edffc6ad" gracePeriod=30 Feb 24 10:51:23 crc kubenswrapper[4755]: I0224 10:51:23.102903 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34738: no serving certificate available for the kubelet" Feb 24 10:51:23 crc kubenswrapper[4755]: E0224 10:51:23.152043 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(2f320527-691f-48e9-a243-f60bc805da39)\"" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39" Feb 24 10:51:23 crc kubenswrapper[4755]: I0224 10:51:23.250100 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34740: no serving certificate available for the kubelet" Feb 24 10:51:23 crc kubenswrapper[4755]: I0224 10:51:23.292786 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34750: no serving certificate available for the kubelet" Feb 24 
10:51:23 crc kubenswrapper[4755]: I0224 10:51:23.328600 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34762: no serving certificate available for the kubelet" Feb 24 10:51:23 crc kubenswrapper[4755]: I0224 10:51:23.345364 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34770: no serving certificate available for the kubelet" Feb 24 10:51:23 crc kubenswrapper[4755]: I0224 10:51:23.494769 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34784: no serving certificate available for the kubelet" Feb 24 10:51:23 crc kubenswrapper[4755]: I0224 10:51:23.509548 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34798: no serving certificate available for the kubelet" Feb 24 10:51:23 crc kubenswrapper[4755]: I0224 10:51:23.531523 4755 ???:1] "http: TLS handshake error from 192.168.126.11:34812: no serving certificate available for the kubelet" Feb 24 10:51:23 crc kubenswrapper[4755]: I0224 10:51:23.643629 4755 generic.go:334] "Generic (PLEG): container finished" podID="2f320527-691f-48e9-a243-f60bc805da39" containerID="fb15228cab6b3b0ebbb161d141dbaf89fb26137ce9638882c45dd4e7edffc6ad" exitCode=143 Feb 24 10:51:23 crc kubenswrapper[4755]: I0224 10:51:23.643675 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2f320527-691f-48e9-a243-f60bc805da39","Type":"ContainerDied","Data":"fb15228cab6b3b0ebbb161d141dbaf89fb26137ce9638882c45dd4e7edffc6ad"} Feb 24 10:51:23 crc kubenswrapper[4755]: I0224 10:51:23.643718 4755 scope.go:117] "RemoveContainer" containerID="ef4ca47fbdee8a817270c1b69e52b17aa4e27f34d8d6eed7da45026d0c6a003d" Feb 24 10:51:23 crc kubenswrapper[4755]: I0224 10:51:23.644330 4755 scope.go:117] "RemoveContainer" containerID="fb15228cab6b3b0ebbb161d141dbaf89fb26137ce9638882c45dd4e7edffc6ad" Feb 24 10:51:23 crc kubenswrapper[4755]: E0224 10:51:23.644549 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(2f320527-691f-48e9-a243-f60bc805da39)\"" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39" Feb 24 10:51:23 crc kubenswrapper[4755]: I0224 10:51:23.680965 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51508: no serving certificate available for the kubelet" Feb 24 10:51:23 crc kubenswrapper[4755]: I0224 10:51:23.854310 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51520: no serving certificate available for the kubelet" Feb 24 10:51:23 crc kubenswrapper[4755]: I0224 10:51:23.855170 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51546: no serving certificate available for the kubelet" Feb 24 10:51:23 crc kubenswrapper[4755]: I0224 10:51:23.855260 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51532: no serving certificate available for the kubelet" Feb 24 10:51:24 crc kubenswrapper[4755]: I0224 10:51:24.016663 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51562: no serving certificate available for the kubelet" Feb 24 10:51:24 crc kubenswrapper[4755]: I0224 10:51:24.047005 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51572: no serving certificate available for the kubelet" Feb 24 10:51:24 crc kubenswrapper[4755]: I0224 10:51:24.047985 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51584: no serving certificate available for the kubelet" Feb 24 10:51:24 crc kubenswrapper[4755]: I0224 10:51:24.084363 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51598: no serving certificate available for the kubelet" Feb 24 10:51:24 crc kubenswrapper[4755]: I0224 10:51:24.212007 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51608: no serving certificate available for the kubelet" Feb 24 10:51:24 crc kubenswrapper[4755]: I0224 10:51:24.383561 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51622: no serving certificate 
available for the kubelet" Feb 24 10:51:24 crc kubenswrapper[4755]: I0224 10:51:24.413364 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51634: no serving certificate available for the kubelet" Feb 24 10:51:24 crc kubenswrapper[4755]: I0224 10:51:24.439341 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51640: no serving certificate available for the kubelet" Feb 24 10:51:24 crc kubenswrapper[4755]: I0224 10:51:24.578620 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51648: no serving certificate available for the kubelet" Feb 24 10:51:24 crc kubenswrapper[4755]: I0224 10:51:24.591837 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51650: no serving certificate available for the kubelet" Feb 24 10:51:24 crc kubenswrapper[4755]: I0224 10:51:24.598073 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51662: no serving certificate available for the kubelet" Feb 24 10:51:24 crc kubenswrapper[4755]: I0224 10:51:24.753227 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51672: no serving certificate available for the kubelet" Feb 24 10:51:24 crc kubenswrapper[4755]: I0224 10:51:24.968582 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51678: no serving certificate available for the kubelet" Feb 24 10:51:24 crc kubenswrapper[4755]: I0224 10:51:24.979385 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51690: no serving certificate available for the kubelet" Feb 24 10:51:25 crc kubenswrapper[4755]: I0224 10:51:25.003839 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51698: no serving certificate available for the kubelet" Feb 24 10:51:25 crc kubenswrapper[4755]: I0224 10:51:25.159876 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51708: no serving certificate available for the kubelet" Feb 24 10:51:25 crc kubenswrapper[4755]: I0224 10:51:25.180985 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51716: no serving certificate available for the kubelet" Feb 
24 10:51:25 crc kubenswrapper[4755]: I0224 10:51:25.266377 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51730: no serving certificate available for the kubelet" Feb 24 10:51:25 crc kubenswrapper[4755]: I0224 10:51:25.376100 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51746: no serving certificate available for the kubelet" Feb 24 10:51:25 crc kubenswrapper[4755]: I0224 10:51:25.525291 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51752: no serving certificate available for the kubelet" Feb 24 10:51:25 crc kubenswrapper[4755]: I0224 10:51:25.537605 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51758: no serving certificate available for the kubelet" Feb 24 10:51:25 crc kubenswrapper[4755]: I0224 10:51:25.549157 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51772: no serving certificate available for the kubelet" Feb 24 10:51:25 crc kubenswrapper[4755]: I0224 10:51:25.716507 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51780: no serving certificate available for the kubelet" Feb 24 10:51:25 crc kubenswrapper[4755]: I0224 10:51:25.716586 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51796: no serving certificate available for the kubelet" Feb 24 10:51:25 crc kubenswrapper[4755]: I0224 10:51:25.799584 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51798: no serving certificate available for the kubelet" Feb 24 10:51:25 crc kubenswrapper[4755]: I0224 10:51:25.891558 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51812: no serving certificate available for the kubelet" Feb 24 10:51:25 crc kubenswrapper[4755]: I0224 10:51:25.963257 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51822: no serving certificate available for the kubelet" Feb 24 10:51:26 crc kubenswrapper[4755]: I0224 10:51:26.156282 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51830: no serving certificate available for the kubelet" Feb 24 10:51:26 crc 
kubenswrapper[4755]: I0224 10:51:26.160142 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51842: no serving certificate available for the kubelet" Feb 24 10:51:26 crc kubenswrapper[4755]: I0224 10:51:26.174645 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51846: no serving certificate available for the kubelet" Feb 24 10:51:26 crc kubenswrapper[4755]: I0224 10:51:26.286243 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51848: no serving certificate available for the kubelet" Feb 24 10:51:26 crc kubenswrapper[4755]: I0224 10:51:26.324598 4755 scope.go:117] "RemoveContainer" containerID="1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" Feb 24 10:51:26 crc kubenswrapper[4755]: E0224 10:51:26.325114 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:51:26 crc kubenswrapper[4755]: I0224 10:51:26.364229 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51852: no serving certificate available for the kubelet" Feb 24 10:51:26 crc kubenswrapper[4755]: I0224 10:51:26.389821 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51862: no serving certificate available for the kubelet" Feb 24 10:51:26 crc kubenswrapper[4755]: I0224 10:51:26.419145 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51872: no serving certificate available for the kubelet" Feb 24 10:51:26 crc kubenswrapper[4755]: I0224 10:51:26.538625 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51884: no serving certificate available for the kubelet" Feb 24 10:51:26 crc kubenswrapper[4755]: I0224 10:51:26.711391 4755 ???:1] "http: TLS 
handshake error from 192.168.126.11:51888: no serving certificate available for the kubelet" Feb 24 10:51:26 crc kubenswrapper[4755]: I0224 10:51:26.720902 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51896: no serving certificate available for the kubelet" Feb 24 10:51:26 crc kubenswrapper[4755]: I0224 10:51:26.771898 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51908: no serving certificate available for the kubelet" Feb 24 10:51:26 crc kubenswrapper[4755]: I0224 10:51:26.896441 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51918: no serving certificate available for the kubelet" Feb 24 10:51:26 crc kubenswrapper[4755]: I0224 10:51:26.898651 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51920: no serving certificate available for the kubelet" Feb 24 10:51:26 crc kubenswrapper[4755]: I0224 10:51:26.930110 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51928: no serving certificate available for the kubelet" Feb 24 10:51:27 crc kubenswrapper[4755]: I0224 10:51:27.055388 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51930: no serving certificate available for the kubelet" Feb 24 10:51:29 crc kubenswrapper[4755]: I0224 10:51:29.327959 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51934: no serving certificate available for the kubelet" Feb 24 10:51:29 crc kubenswrapper[4755]: I0224 10:51:29.887154 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 10:51:29 crc kubenswrapper[4755]: I0224 10:51:29.888230 4755 scope.go:117] "RemoveContainer" containerID="143341025cc371f0627ece60e54135ae4e4726a34d9c2a806765f32736f60a34" Feb 24 10:51:29 crc kubenswrapper[4755]: E0224 10:51:29.888609 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera 
pod=openstack-galera-0_openstack(fe13802e-a28d-4e11-a315-c0ae66bf0e1c)\"" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" Feb 24 10:51:30 crc kubenswrapper[4755]: I0224 10:51:30.106467 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51942: no serving certificate available for the kubelet" Feb 24 10:51:31 crc kubenswrapper[4755]: I0224 10:51:31.289681 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 10:51:31 crc kubenswrapper[4755]: I0224 10:51:31.290718 4755 scope.go:117] "RemoveContainer" containerID="fb15228cab6b3b0ebbb161d141dbaf89fb26137ce9638882c45dd4e7edffc6ad" Feb 24 10:51:31 crc kubenswrapper[4755]: E0224 10:51:31.291138 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(2f320527-691f-48e9-a243-f60bc805da39)\"" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39" Feb 24 10:51:32 crc kubenswrapper[4755]: I0224 10:51:32.376224 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51948: no serving certificate available for the kubelet" Feb 24 10:51:33 crc kubenswrapper[4755]: I0224 10:51:33.154477 4755 ???:1] "http: TLS handshake error from 192.168.126.11:51956: no serving certificate available for the kubelet" Feb 24 10:51:33 crc kubenswrapper[4755]: I0224 10:51:33.838261 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59668: no serving certificate available for the kubelet" Feb 24 10:51:35 crc kubenswrapper[4755]: I0224 10:51:35.417626 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59682: no serving certificate available for the kubelet" Feb 24 10:51:36 crc kubenswrapper[4755]: I0224 10:51:36.213735 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59688: no serving certificate available for the kubelet" 
Feb 24 10:51:38 crc kubenswrapper[4755]: I0224 10:51:38.472256 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59694: no serving certificate available for the kubelet" Feb 24 10:51:39 crc kubenswrapper[4755]: I0224 10:51:39.267205 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59696: no serving certificate available for the kubelet" Feb 24 10:51:40 crc kubenswrapper[4755]: I0224 10:51:40.317645 4755 scope.go:117] "RemoveContainer" containerID="143341025cc371f0627ece60e54135ae4e4726a34d9c2a806765f32736f60a34" Feb 24 10:51:40 crc kubenswrapper[4755]: E0224 10:51:40.318285 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(fe13802e-a28d-4e11-a315-c0ae66bf0e1c)\"" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" Feb 24 10:51:41 crc kubenswrapper[4755]: I0224 10:51:41.316888 4755 scope.go:117] "RemoveContainer" containerID="1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" Feb 24 10:51:41 crc kubenswrapper[4755]: E0224 10:51:41.317521 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7ll_openshift-machine-config-operator(f6407399-185a-4b27-bd1d-d3816e43a0b5)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" podUID="f6407399-185a-4b27-bd1d-d3816e43a0b5" Feb 24 10:51:41 crc kubenswrapper[4755]: I0224 10:51:41.516871 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59698: no serving certificate available for the kubelet" Feb 24 10:51:42 crc kubenswrapper[4755]: I0224 10:51:42.199999 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bjclm"] Feb 24 10:51:42 crc kubenswrapper[4755]: 
E0224 10:51:42.200392 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd7a7c6e-162b-44f2-96ac-6f09e07fd37a" containerName="container-00" Feb 24 10:51:42 crc kubenswrapper[4755]: I0224 10:51:42.200415 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd7a7c6e-162b-44f2-96ac-6f09e07fd37a" containerName="container-00" Feb 24 10:51:42 crc kubenswrapper[4755]: I0224 10:51:42.200612 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd7a7c6e-162b-44f2-96ac-6f09e07fd37a" containerName="container-00" Feb 24 10:51:42 crc kubenswrapper[4755]: I0224 10:51:42.202020 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjclm" Feb 24 10:51:42 crc kubenswrapper[4755]: I0224 10:51:42.217901 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjclm"] Feb 24 10:51:42 crc kubenswrapper[4755]: I0224 10:51:42.285630 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/145e1021-ed89-4c1e-bf0d-83ec40600b8e-catalog-content\") pod \"redhat-marketplace-bjclm\" (UID: \"145e1021-ed89-4c1e-bf0d-83ec40600b8e\") " pod="openshift-marketplace/redhat-marketplace-bjclm" Feb 24 10:51:42 crc kubenswrapper[4755]: I0224 10:51:42.285709 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb78c\" (UniqueName: \"kubernetes.io/projected/145e1021-ed89-4c1e-bf0d-83ec40600b8e-kube-api-access-fb78c\") pod \"redhat-marketplace-bjclm\" (UID: \"145e1021-ed89-4c1e-bf0d-83ec40600b8e\") " pod="openshift-marketplace/redhat-marketplace-bjclm" Feb 24 10:51:42 crc kubenswrapper[4755]: I0224 10:51:42.286028 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/145e1021-ed89-4c1e-bf0d-83ec40600b8e-utilities\") pod \"redhat-marketplace-bjclm\" (UID: \"145e1021-ed89-4c1e-bf0d-83ec40600b8e\") " pod="openshift-marketplace/redhat-marketplace-bjclm" Feb 24 10:51:42 crc kubenswrapper[4755]: I0224 10:51:42.311724 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59714: no serving certificate available for the kubelet" Feb 24 10:51:42 crc kubenswrapper[4755]: I0224 10:51:42.387128 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/145e1021-ed89-4c1e-bf0d-83ec40600b8e-utilities\") pod \"redhat-marketplace-bjclm\" (UID: \"145e1021-ed89-4c1e-bf0d-83ec40600b8e\") " pod="openshift-marketplace/redhat-marketplace-bjclm" Feb 24 10:51:42 crc kubenswrapper[4755]: I0224 10:51:42.387191 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/145e1021-ed89-4c1e-bf0d-83ec40600b8e-catalog-content\") pod \"redhat-marketplace-bjclm\" (UID: \"145e1021-ed89-4c1e-bf0d-83ec40600b8e\") " pod="openshift-marketplace/redhat-marketplace-bjclm" Feb 24 10:51:42 crc kubenswrapper[4755]: I0224 10:51:42.387230 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb78c\" (UniqueName: \"kubernetes.io/projected/145e1021-ed89-4c1e-bf0d-83ec40600b8e-kube-api-access-fb78c\") pod \"redhat-marketplace-bjclm\" (UID: \"145e1021-ed89-4c1e-bf0d-83ec40600b8e\") " pod="openshift-marketplace/redhat-marketplace-bjclm" Feb 24 10:51:42 crc kubenswrapper[4755]: I0224 10:51:42.387654 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/145e1021-ed89-4c1e-bf0d-83ec40600b8e-utilities\") pod \"redhat-marketplace-bjclm\" (UID: \"145e1021-ed89-4c1e-bf0d-83ec40600b8e\") " pod="openshift-marketplace/redhat-marketplace-bjclm" Feb 24 10:51:42 crc kubenswrapper[4755]: I0224 
10:51:42.387918 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/145e1021-ed89-4c1e-bf0d-83ec40600b8e-catalog-content\") pod \"redhat-marketplace-bjclm\" (UID: \"145e1021-ed89-4c1e-bf0d-83ec40600b8e\") " pod="openshift-marketplace/redhat-marketplace-bjclm" Feb 24 10:51:42 crc kubenswrapper[4755]: I0224 10:51:42.410043 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb78c\" (UniqueName: \"kubernetes.io/projected/145e1021-ed89-4c1e-bf0d-83ec40600b8e-kube-api-access-fb78c\") pod \"redhat-marketplace-bjclm\" (UID: \"145e1021-ed89-4c1e-bf0d-83ec40600b8e\") " pod="openshift-marketplace/redhat-marketplace-bjclm" Feb 24 10:51:42 crc kubenswrapper[4755]: I0224 10:51:42.520629 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjclm" Feb 24 10:51:42 crc kubenswrapper[4755]: W0224 10:51:42.980451 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod145e1021_ed89_4c1e_bf0d_83ec40600b8e.slice/crio-d070137eb11724ba03991a267c52d3566b3a9d0ce67f8ddc81e415600560dbdf WatchSource:0}: Error finding container d070137eb11724ba03991a267c52d3566b3a9d0ce67f8ddc81e415600560dbdf: Status 404 returned error can't find the container with id d070137eb11724ba03991a267c52d3566b3a9d0ce67f8ddc81e415600560dbdf Feb 24 10:51:42 crc kubenswrapper[4755]: I0224 10:51:42.982012 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjclm"] Feb 24 10:51:43 crc kubenswrapper[4755]: I0224 10:51:43.807427 4755 generic.go:334] "Generic (PLEG): container finished" podID="145e1021-ed89-4c1e-bf0d-83ec40600b8e" containerID="76940a53486eba945b9c12bc50449b18de0a270ef81f8bc2bf1d327cfe1db4b4" exitCode=0 Feb 24 10:51:43 crc kubenswrapper[4755]: I0224 10:51:43.807495 4755 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjclm" event={"ID":"145e1021-ed89-4c1e-bf0d-83ec40600b8e","Type":"ContainerDied","Data":"76940a53486eba945b9c12bc50449b18de0a270ef81f8bc2bf1d327cfe1db4b4"} Feb 24 10:51:43 crc kubenswrapper[4755]: I0224 10:51:43.809295 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjclm" event={"ID":"145e1021-ed89-4c1e-bf0d-83ec40600b8e","Type":"ContainerStarted","Data":"d070137eb11724ba03991a267c52d3566b3a9d0ce67f8ddc81e415600560dbdf"} Feb 24 10:51:44 crc kubenswrapper[4755]: I0224 10:51:44.047817 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49932: no serving certificate available for the kubelet" Feb 24 10:51:44 crc kubenswrapper[4755]: I0224 10:51:44.061143 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49936: no serving certificate available for the kubelet" Feb 24 10:51:44 crc kubenswrapper[4755]: I0224 10:51:44.234162 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49948: no serving certificate available for the kubelet" Feb 24 10:51:44 crc kubenswrapper[4755]: I0224 10:51:44.247427 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49954: no serving certificate available for the kubelet" Feb 24 10:51:44 crc kubenswrapper[4755]: I0224 10:51:44.556141 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49970: no serving certificate available for the kubelet" Feb 24 10:51:44 crc kubenswrapper[4755]: I0224 10:51:44.644341 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49982: no serving certificate available for the kubelet" Feb 24 10:51:44 crc kubenswrapper[4755]: I0224 10:51:44.654478 4755 ???:1] "http: TLS handshake error from 192.168.126.11:49998: no serving certificate available for the kubelet" Feb 24 10:51:44 crc kubenswrapper[4755]: I0224 10:51:44.725456 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50014: no serving certificate available for the kubelet" Feb 24 10:51:44 crc 
kubenswrapper[4755]: I0224 10:51:44.739354 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50026: no serving certificate available for the kubelet" Feb 24 10:51:44 crc kubenswrapper[4755]: I0224 10:51:44.747825 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50028: no serving certificate available for the kubelet" Feb 24 10:51:44 crc kubenswrapper[4755]: I0224 10:51:44.750673 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50032: no serving certificate available for the kubelet" Feb 24 10:51:44 crc kubenswrapper[4755]: I0224 10:51:44.758610 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50036: no serving certificate available for the kubelet" Feb 24 10:51:44 crc kubenswrapper[4755]: I0224 10:51:44.764566 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50052: no serving certificate available for the kubelet" Feb 24 10:51:44 crc kubenswrapper[4755]: I0224 10:51:44.781327 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50064: no serving certificate available for the kubelet" Feb 24 10:51:44 crc kubenswrapper[4755]: I0224 10:51:44.790445 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50080: no serving certificate available for the kubelet" Feb 24 10:51:44 crc kubenswrapper[4755]: I0224 10:51:44.793053 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50096: no serving certificate available for the kubelet" Feb 24 10:51:44 crc kubenswrapper[4755]: I0224 10:51:44.803764 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50104: no serving certificate available for the kubelet" Feb 24 10:51:44 crc kubenswrapper[4755]: I0224 10:51:44.818213 4755 generic.go:334] "Generic (PLEG): container finished" podID="145e1021-ed89-4c1e-bf0d-83ec40600b8e" containerID="354b6dcaa096c7f695df67144ec34d4d71177aa58462c6a3274bab40c12ef5de" exitCode=0 Feb 24 10:51:44 crc kubenswrapper[4755]: I0224 10:51:44.818257 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-bjclm" event={"ID":"145e1021-ed89-4c1e-bf0d-83ec40600b8e","Type":"ContainerDied","Data":"354b6dcaa096c7f695df67144ec34d4d71177aa58462c6a3274bab40c12ef5de"} Feb 24 10:51:44 crc kubenswrapper[4755]: I0224 10:51:44.885242 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50112: no serving certificate available for the kubelet" Feb 24 10:51:44 crc kubenswrapper[4755]: I0224 10:51:44.898332 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50126: no serving certificate available for the kubelet" Feb 24 10:51:44 crc kubenswrapper[4755]: I0224 10:51:44.928000 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50130: no serving certificate available for the kubelet" Feb 24 10:51:44 crc kubenswrapper[4755]: I0224 10:51:44.939994 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50138: no serving certificate available for the kubelet" Feb 24 10:51:45 crc kubenswrapper[4755]: I0224 10:51:45.002486 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50150: no serving certificate available for the kubelet" Feb 24 10:51:45 crc kubenswrapper[4755]: I0224 10:51:45.005450 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50158: no serving certificate available for the kubelet" Feb 24 10:51:45 crc kubenswrapper[4755]: I0224 10:51:45.016941 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50170: no serving certificate available for the kubelet" Feb 24 10:51:45 crc kubenswrapper[4755]: I0224 10:51:45.017120 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50172: no serving certificate available for the kubelet" Feb 24 10:51:45 crc kubenswrapper[4755]: I0224 10:51:45.026020 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50188: no serving certificate available for the kubelet" Feb 24 10:51:45 crc kubenswrapper[4755]: I0224 10:51:45.036122 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50198: no serving certificate available for the kubelet" Feb 24 
10:51:45 crc kubenswrapper[4755]: I0224 10:51:45.038996 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50204: no serving certificate available for the kubelet" Feb 24 10:51:45 crc kubenswrapper[4755]: I0224 10:51:45.046109 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50210: no serving certificate available for the kubelet" Feb 24 10:51:45 crc kubenswrapper[4755]: I0224 10:51:45.107690 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50218: no serving certificate available for the kubelet" Feb 24 10:51:45 crc kubenswrapper[4755]: I0224 10:51:45.118354 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50228: no serving certificate available for the kubelet" Feb 24 10:51:45 crc kubenswrapper[4755]: I0224 10:51:45.178660 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50234: no serving certificate available for the kubelet" Feb 24 10:51:45 crc kubenswrapper[4755]: I0224 10:51:45.189810 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50248: no serving certificate available for the kubelet" Feb 24 10:51:45 crc kubenswrapper[4755]: I0224 10:51:45.316853 4755 scope.go:117] "RemoveContainer" containerID="fb15228cab6b3b0ebbb161d141dbaf89fb26137ce9638882c45dd4e7edffc6ad" Feb 24 10:51:45 crc kubenswrapper[4755]: E0224 10:51:45.317037 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(2f320527-691f-48e9-a243-f60bc805da39)\"" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39" Feb 24 10:51:45 crc kubenswrapper[4755]: I0224 10:51:45.373271 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50264: no serving certificate available for the kubelet" Feb 24 10:51:45 crc kubenswrapper[4755]: I0224 10:51:45.827173 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-bjclm" event={"ID":"145e1021-ed89-4c1e-bf0d-83ec40600b8e","Type":"ContainerStarted","Data":"e24ecc210b3c29f609c46505dbaf3b80d8b5faf8b181ea9be0d0b1d211acf4a1"} Feb 24 10:51:45 crc kubenswrapper[4755]: I0224 10:51:45.850528 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bjclm" podStartSLOduration=2.413131249 podStartE2EDuration="3.850506435s" podCreationTimestamp="2026-02-24 10:51:42 +0000 UTC" firstStartedPulling="2026-02-24 10:51:43.81014516 +0000 UTC m=+3408.266667713" lastFinishedPulling="2026-02-24 10:51:45.247520336 +0000 UTC m=+3409.704042899" observedRunningTime="2026-02-24 10:51:45.845330926 +0000 UTC m=+3410.301853470" watchObservedRunningTime="2026-02-24 10:51:45.850506435 +0000 UTC m=+3410.307028978" Feb 24 10:51:47 crc kubenswrapper[4755]: I0224 10:51:47.598783 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50280: no serving certificate available for the kubelet" Feb 24 10:51:48 crc kubenswrapper[4755]: I0224 10:51:48.417426 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50286: no serving certificate available for the kubelet" Feb 24 10:51:50 crc kubenswrapper[4755]: I0224 10:51:50.656180 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50296: no serving certificate available for the kubelet" Feb 24 10:51:51 crc kubenswrapper[4755]: I0224 10:51:51.316996 4755 scope.go:117] "RemoveContainer" containerID="143341025cc371f0627ece60e54135ae4e4726a34d9c2a806765f32736f60a34" Feb 24 10:51:51 crc kubenswrapper[4755]: E0224 10:51:51.317520 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(fe13802e-a28d-4e11-a315-c0ae66bf0e1c)\"" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c" Feb 24 10:51:51 crc kubenswrapper[4755]: I0224 
10:51:51.487854 4755 ???:1] "http: TLS handshake error from 192.168.126.11:50312: no serving certificate available for the kubelet" Feb 24 10:51:52 crc kubenswrapper[4755]: I0224 10:51:52.522922 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bjclm" Feb 24 10:51:52 crc kubenswrapper[4755]: I0224 10:51:52.522968 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bjclm" Feb 24 10:51:52 crc kubenswrapper[4755]: I0224 10:51:52.595038 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bjclm" Feb 24 10:51:52 crc kubenswrapper[4755]: I0224 10:51:52.980622 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bjclm" Feb 24 10:51:53 crc kubenswrapper[4755]: I0224 10:51:53.055705 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjclm"] Feb 24 10:51:53 crc kubenswrapper[4755]: I0224 10:51:53.708793 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48030: no serving certificate available for the kubelet" Feb 24 10:51:54 crc kubenswrapper[4755]: I0224 10:51:54.316458 4755 scope.go:117] "RemoveContainer" containerID="1b27b1e10f30cee7e421b190d380cd87b61f3b8b2402328e54473657f8750258" Feb 24 10:51:54 crc kubenswrapper[4755]: I0224 10:51:54.536444 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48046: no serving certificate available for the kubelet" Feb 24 10:51:54 crc kubenswrapper[4755]: I0224 10:51:54.926779 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bjclm" podUID="145e1021-ed89-4c1e-bf0d-83ec40600b8e" containerName="registry-server" containerID="cri-o://e24ecc210b3c29f609c46505dbaf3b80d8b5faf8b181ea9be0d0b1d211acf4a1" gracePeriod=2 Feb 24 10:51:54 crc 
kubenswrapper[4755]: I0224 10:51:54.927287 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7ll" event={"ID":"f6407399-185a-4b27-bd1d-d3816e43a0b5","Type":"ContainerStarted","Data":"aab571635fea526d48f5661cef933a822135878ac3337873284533ac598a6c1a"} Feb 24 10:51:55 crc kubenswrapper[4755]: I0224 10:51:55.546857 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjclm" Feb 24 10:51:55 crc kubenswrapper[4755]: I0224 10:51:55.727157 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/145e1021-ed89-4c1e-bf0d-83ec40600b8e-utilities\") pod \"145e1021-ed89-4c1e-bf0d-83ec40600b8e\" (UID: \"145e1021-ed89-4c1e-bf0d-83ec40600b8e\") " Feb 24 10:51:55 crc kubenswrapper[4755]: I0224 10:51:55.727496 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/145e1021-ed89-4c1e-bf0d-83ec40600b8e-catalog-content\") pod \"145e1021-ed89-4c1e-bf0d-83ec40600b8e\" (UID: \"145e1021-ed89-4c1e-bf0d-83ec40600b8e\") " Feb 24 10:51:55 crc kubenswrapper[4755]: I0224 10:51:55.727596 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb78c\" (UniqueName: \"kubernetes.io/projected/145e1021-ed89-4c1e-bf0d-83ec40600b8e-kube-api-access-fb78c\") pod \"145e1021-ed89-4c1e-bf0d-83ec40600b8e\" (UID: \"145e1021-ed89-4c1e-bf0d-83ec40600b8e\") " Feb 24 10:51:55 crc kubenswrapper[4755]: I0224 10:51:55.728551 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/145e1021-ed89-4c1e-bf0d-83ec40600b8e-utilities" (OuterVolumeSpecName: "utilities") pod "145e1021-ed89-4c1e-bf0d-83ec40600b8e" (UID: "145e1021-ed89-4c1e-bf0d-83ec40600b8e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:51:55 crc kubenswrapper[4755]: I0224 10:51:55.735609 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/145e1021-ed89-4c1e-bf0d-83ec40600b8e-kube-api-access-fb78c" (OuterVolumeSpecName: "kube-api-access-fb78c") pod "145e1021-ed89-4c1e-bf0d-83ec40600b8e" (UID: "145e1021-ed89-4c1e-bf0d-83ec40600b8e"). InnerVolumeSpecName "kube-api-access-fb78c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 10:51:55 crc kubenswrapper[4755]: I0224 10:51:55.771325 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/145e1021-ed89-4c1e-bf0d-83ec40600b8e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "145e1021-ed89-4c1e-bf0d-83ec40600b8e" (UID: "145e1021-ed89-4c1e-bf0d-83ec40600b8e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 10:51:55 crc kubenswrapper[4755]: I0224 10:51:55.829863 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb78c\" (UniqueName: \"kubernetes.io/projected/145e1021-ed89-4c1e-bf0d-83ec40600b8e-kube-api-access-fb78c\") on node \"crc\" DevicePath \"\"" Feb 24 10:51:55 crc kubenswrapper[4755]: I0224 10:51:55.829924 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/145e1021-ed89-4c1e-bf0d-83ec40600b8e-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 10:51:55 crc kubenswrapper[4755]: I0224 10:51:55.829948 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/145e1021-ed89-4c1e-bf0d-83ec40600b8e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 10:51:55 crc kubenswrapper[4755]: I0224 10:51:55.962758 4755 generic.go:334] "Generic (PLEG): container finished" podID="145e1021-ed89-4c1e-bf0d-83ec40600b8e" 
containerID="e24ecc210b3c29f609c46505dbaf3b80d8b5faf8b181ea9be0d0b1d211acf4a1" exitCode=0 Feb 24 10:51:55 crc kubenswrapper[4755]: I0224 10:51:55.962847 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjclm" event={"ID":"145e1021-ed89-4c1e-bf0d-83ec40600b8e","Type":"ContainerDied","Data":"e24ecc210b3c29f609c46505dbaf3b80d8b5faf8b181ea9be0d0b1d211acf4a1"} Feb 24 10:51:55 crc kubenswrapper[4755]: I0224 10:51:55.962917 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjclm" event={"ID":"145e1021-ed89-4c1e-bf0d-83ec40600b8e","Type":"ContainerDied","Data":"d070137eb11724ba03991a267c52d3566b3a9d0ce67f8ddc81e415600560dbdf"} Feb 24 10:51:55 crc kubenswrapper[4755]: I0224 10:51:55.962951 4755 scope.go:117] "RemoveContainer" containerID="e24ecc210b3c29f609c46505dbaf3b80d8b5faf8b181ea9be0d0b1d211acf4a1" Feb 24 10:51:55 crc kubenswrapper[4755]: I0224 10:51:55.963236 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjclm" Feb 24 10:51:56 crc kubenswrapper[4755]: I0224 10:51:56.010026 4755 scope.go:117] "RemoveContainer" containerID="354b6dcaa096c7f695df67144ec34d4d71177aa58462c6a3274bab40c12ef5de" Feb 24 10:51:56 crc kubenswrapper[4755]: I0224 10:51:56.012567 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjclm"] Feb 24 10:51:56 crc kubenswrapper[4755]: I0224 10:51:56.027808 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjclm"] Feb 24 10:51:56 crc kubenswrapper[4755]: I0224 10:51:56.047237 4755 scope.go:117] "RemoveContainer" containerID="76940a53486eba945b9c12bc50449b18de0a270ef81f8bc2bf1d327cfe1db4b4" Feb 24 10:51:56 crc kubenswrapper[4755]: I0224 10:51:56.090522 4755 scope.go:117] "RemoveContainer" containerID="e24ecc210b3c29f609c46505dbaf3b80d8b5faf8b181ea9be0d0b1d211acf4a1" Feb 24 10:51:56 crc kubenswrapper[4755]: E0224 10:51:56.091433 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e24ecc210b3c29f609c46505dbaf3b80d8b5faf8b181ea9be0d0b1d211acf4a1\": container with ID starting with e24ecc210b3c29f609c46505dbaf3b80d8b5faf8b181ea9be0d0b1d211acf4a1 not found: ID does not exist" containerID="e24ecc210b3c29f609c46505dbaf3b80d8b5faf8b181ea9be0d0b1d211acf4a1" Feb 24 10:51:56 crc kubenswrapper[4755]: I0224 10:51:56.091470 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e24ecc210b3c29f609c46505dbaf3b80d8b5faf8b181ea9be0d0b1d211acf4a1"} err="failed to get container status \"e24ecc210b3c29f609c46505dbaf3b80d8b5faf8b181ea9be0d0b1d211acf4a1\": rpc error: code = NotFound desc = could not find container \"e24ecc210b3c29f609c46505dbaf3b80d8b5faf8b181ea9be0d0b1d211acf4a1\": container with ID starting with e24ecc210b3c29f609c46505dbaf3b80d8b5faf8b181ea9be0d0b1d211acf4a1 not found: 
ID does not exist" Feb 24 10:51:56 crc kubenswrapper[4755]: I0224 10:51:56.091495 4755 scope.go:117] "RemoveContainer" containerID="354b6dcaa096c7f695df67144ec34d4d71177aa58462c6a3274bab40c12ef5de" Feb 24 10:51:56 crc kubenswrapper[4755]: E0224 10:51:56.092366 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"354b6dcaa096c7f695df67144ec34d4d71177aa58462c6a3274bab40c12ef5de\": container with ID starting with 354b6dcaa096c7f695df67144ec34d4d71177aa58462c6a3274bab40c12ef5de not found: ID does not exist" containerID="354b6dcaa096c7f695df67144ec34d4d71177aa58462c6a3274bab40c12ef5de" Feb 24 10:51:56 crc kubenswrapper[4755]: I0224 10:51:56.092411 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"354b6dcaa096c7f695df67144ec34d4d71177aa58462c6a3274bab40c12ef5de"} err="failed to get container status \"354b6dcaa096c7f695df67144ec34d4d71177aa58462c6a3274bab40c12ef5de\": rpc error: code = NotFound desc = could not find container \"354b6dcaa096c7f695df67144ec34d4d71177aa58462c6a3274bab40c12ef5de\": container with ID starting with 354b6dcaa096c7f695df67144ec34d4d71177aa58462c6a3274bab40c12ef5de not found: ID does not exist" Feb 24 10:51:56 crc kubenswrapper[4755]: I0224 10:51:56.092437 4755 scope.go:117] "RemoveContainer" containerID="76940a53486eba945b9c12bc50449b18de0a270ef81f8bc2bf1d327cfe1db4b4" Feb 24 10:51:56 crc kubenswrapper[4755]: E0224 10:51:56.092947 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76940a53486eba945b9c12bc50449b18de0a270ef81f8bc2bf1d327cfe1db4b4\": container with ID starting with 76940a53486eba945b9c12bc50449b18de0a270ef81f8bc2bf1d327cfe1db4b4 not found: ID does not exist" containerID="76940a53486eba945b9c12bc50449b18de0a270ef81f8bc2bf1d327cfe1db4b4" Feb 24 10:51:56 crc kubenswrapper[4755]: I0224 10:51:56.092972 4755 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76940a53486eba945b9c12bc50449b18de0a270ef81f8bc2bf1d327cfe1db4b4"} err="failed to get container status \"76940a53486eba945b9c12bc50449b18de0a270ef81f8bc2bf1d327cfe1db4b4\": rpc error: code = NotFound desc = could not find container \"76940a53486eba945b9c12bc50449b18de0a270ef81f8bc2bf1d327cfe1db4b4\": container with ID starting with 76940a53486eba945b9c12bc50449b18de0a270ef81f8bc2bf1d327cfe1db4b4 not found: ID does not exist"
Feb 24 10:51:56 crc kubenswrapper[4755]: I0224 10:51:56.330502 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="145e1021-ed89-4c1e-bf0d-83ec40600b8e" path="/var/lib/kubelet/pods/145e1021-ed89-4c1e-bf0d-83ec40600b8e/volumes"
Feb 24 10:51:56 crc kubenswrapper[4755]: I0224 10:51:56.790355 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48052: no serving certificate available for the kubelet"
Feb 24 10:51:57 crc kubenswrapper[4755]: I0224 10:51:57.316349 4755 scope.go:117] "RemoveContainer" containerID="fb15228cab6b3b0ebbb161d141dbaf89fb26137ce9638882c45dd4e7edffc6ad"
Feb 24 10:51:57 crc kubenswrapper[4755]: E0224 10:51:57.317056 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(2f320527-691f-48e9-a243-f60bc805da39)\"" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39"
Feb 24 10:51:57 crc kubenswrapper[4755]: I0224 10:51:57.595487 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48056: no serving certificate available for the kubelet"
Feb 24 10:51:59 crc kubenswrapper[4755]: I0224 10:51:59.896633 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48066: no serving certificate available for the kubelet"
Feb 24 10:52:00 crc kubenswrapper[4755]: I0224 10:52:00.667110 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48068: no serving certificate available for the kubelet"
Feb 24 10:52:02 crc kubenswrapper[4755]: I0224 10:52:02.952869 4755 ???:1] "http: TLS handshake error from 192.168.126.11:48080: no serving certificate available for the kubelet"
Feb 24 10:52:03 crc kubenswrapper[4755]: I0224 10:52:03.714437 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43050: no serving certificate available for the kubelet"
Feb 24 10:52:05 crc kubenswrapper[4755]: I0224 10:52:05.318200 4755 scope.go:117] "RemoveContainer" containerID="143341025cc371f0627ece60e54135ae4e4726a34d9c2a806765f32736f60a34"
Feb 24 10:52:05 crc kubenswrapper[4755]: E0224 10:52:05.320294 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(fe13802e-a28d-4e11-a315-c0ae66bf0e1c)\"" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c"
Feb 24 10:52:05 crc kubenswrapper[4755]: I0224 10:52:05.997462 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43054: no serving certificate available for the kubelet"
Feb 24 10:52:06 crc kubenswrapper[4755]: I0224 10:52:06.778127 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43064: no serving certificate available for the kubelet"
Feb 24 10:52:09 crc kubenswrapper[4755]: I0224 10:52:09.036722 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43076: no serving certificate available for the kubelet"
Feb 24 10:52:09 crc kubenswrapper[4755]: I0224 10:52:09.821577 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43088: no serving certificate available for the kubelet"
Feb 24 10:52:12 crc kubenswrapper[4755]: I0224 10:52:12.096660 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43100: no serving certificate available for the kubelet"
Feb 24 10:52:12 crc kubenswrapper[4755]: I0224 10:52:12.316692 4755 scope.go:117] "RemoveContainer" containerID="fb15228cab6b3b0ebbb161d141dbaf89fb26137ce9638882c45dd4e7edffc6ad"
Feb 24 10:52:12 crc kubenswrapper[4755]: E0224 10:52:12.317144 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(2f320527-691f-48e9-a243-f60bc805da39)\"" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39"
Feb 24 10:52:12 crc kubenswrapper[4755]: I0224 10:52:12.873411 4755 ???:1] "http: TLS handshake error from 192.168.126.11:43102: no serving certificate available for the kubelet"
Feb 24 10:52:15 crc kubenswrapper[4755]: I0224 10:52:15.160366 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38022: no serving certificate available for the kubelet"
Feb 24 10:52:15 crc kubenswrapper[4755]: I0224 10:52:15.932651 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38026: no serving certificate available for the kubelet"
Feb 24 10:52:18 crc kubenswrapper[4755]: I0224 10:52:18.191480 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38034: no serving certificate available for the kubelet"
Feb 24 10:52:18 crc kubenswrapper[4755]: I0224 10:52:18.987683 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38038: no serving certificate available for the kubelet"
Feb 24 10:52:19 crc kubenswrapper[4755]: I0224 10:52:19.316349 4755 scope.go:117] "RemoveContainer" containerID="143341025cc371f0627ece60e54135ae4e4726a34d9c2a806765f32736f60a34"
Feb 24 10:52:19 crc kubenswrapper[4755]: E0224 10:52:19.316845 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(fe13802e-a28d-4e11-a315-c0ae66bf0e1c)\"" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c"
Feb 24 10:52:21 crc kubenswrapper[4755]: I0224 10:52:21.262865 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38042: no serving certificate available for the kubelet"
Feb 24 10:52:22 crc kubenswrapper[4755]: I0224 10:52:22.043552 4755 ???:1] "http: TLS handshake error from 192.168.126.11:38046: no serving certificate available for the kubelet"
Feb 24 10:52:24 crc kubenswrapper[4755]: I0224 10:52:24.312519 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44242: no serving certificate available for the kubelet"
Feb 24 10:52:25 crc kubenswrapper[4755]: I0224 10:52:25.137626 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44254: no serving certificate available for the kubelet"
Feb 24 10:52:25 crc kubenswrapper[4755]: I0224 10:52:25.317320 4755 scope.go:117] "RemoveContainer" containerID="fb15228cab6b3b0ebbb161d141dbaf89fb26137ce9638882c45dd4e7edffc6ad"
Feb 24 10:52:25 crc kubenswrapper[4755]: E0224 10:52:25.317547 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(2f320527-691f-48e9-a243-f60bc805da39)\"" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39"
Feb 24 10:52:27 crc kubenswrapper[4755]: I0224 10:52:27.354884 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44266: no serving certificate available for the kubelet"
Feb 24 10:52:28 crc kubenswrapper[4755]: I0224 10:52:28.188978 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44282: no serving certificate available for the kubelet"
Feb 24 10:52:30 crc kubenswrapper[4755]: I0224 10:52:30.316897 4755 scope.go:117] "RemoveContainer" containerID="143341025cc371f0627ece60e54135ae4e4726a34d9c2a806765f32736f60a34"
Feb 24 10:52:30 crc kubenswrapper[4755]: E0224 10:52:30.317600 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(fe13802e-a28d-4e11-a315-c0ae66bf0e1c)\"" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c"
Feb 24 10:52:30 crc kubenswrapper[4755]: I0224 10:52:30.392159 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44294: no serving certificate available for the kubelet"
Feb 24 10:52:31 crc kubenswrapper[4755]: I0224 10:52:31.236438 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44306: no serving certificate available for the kubelet"
Feb 24 10:52:33 crc kubenswrapper[4755]: I0224 10:52:33.440280 4755 ???:1] "http: TLS handshake error from 192.168.126.11:44318: no serving certificate available for the kubelet"
Feb 24 10:52:34 crc kubenswrapper[4755]: I0224 10:52:34.334025 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42436: no serving certificate available for the kubelet"
Feb 24 10:52:36 crc kubenswrapper[4755]: I0224 10:52:36.332382 4755 scope.go:117] "RemoveContainer" containerID="fb15228cab6b3b0ebbb161d141dbaf89fb26137ce9638882c45dd4e7edffc6ad"
Feb 24 10:52:36 crc kubenswrapper[4755]: E0224 10:52:36.334119 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(2f320527-691f-48e9-a243-f60bc805da39)\"" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39"
Feb 24 10:52:36 crc kubenswrapper[4755]: I0224 10:52:36.483131 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42448: no serving certificate available for the kubelet"
Feb 24 10:52:37 crc kubenswrapper[4755]: I0224 10:52:37.383853 4755 generic.go:334] "Generic (PLEG): container finished" podID="0c8dd3be-6e22-4240-a626-dc83b7a646d7" containerID="ac8ac232dbb29a435e990e1d3e301b5751c27f11041e8984ee6057f0cffdaede" exitCode=0
Feb 24 10:52:37 crc kubenswrapper[4755]: I0224 10:52:37.383993 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rcvqm/must-gather-5ssns" event={"ID":"0c8dd3be-6e22-4240-a626-dc83b7a646d7","Type":"ContainerDied","Data":"ac8ac232dbb29a435e990e1d3e301b5751c27f11041e8984ee6057f0cffdaede"}
Feb 24 10:52:37 crc kubenswrapper[4755]: I0224 10:52:37.388607 4755 scope.go:117] "RemoveContainer" containerID="ac8ac232dbb29a435e990e1d3e301b5751c27f11041e8984ee6057f0cffdaede"
Feb 24 10:52:37 crc kubenswrapper[4755]: I0224 10:52:37.393149 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42462: no serving certificate available for the kubelet"
Feb 24 10:52:39 crc kubenswrapper[4755]: I0224 10:52:39.526727 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42470: no serving certificate available for the kubelet"
Feb 24 10:52:40 crc kubenswrapper[4755]: I0224 10:52:40.432012 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42480: no serving certificate available for the kubelet"
Feb 24 10:52:40 crc kubenswrapper[4755]: I0224 10:52:40.586517 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42492: no serving certificate available for the kubelet"
Feb 24 10:52:40 crc kubenswrapper[4755]: I0224 10:52:40.718825 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42508: no serving certificate available for the kubelet"
Feb 24 10:52:40 crc kubenswrapper[4755]: I0224 10:52:40.728564 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42514: no serving certificate available for the kubelet"
Feb 24 10:52:40 crc kubenswrapper[4755]: I0224 10:52:40.749785 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42522: no serving certificate available for the kubelet"
Feb 24 10:52:40 crc kubenswrapper[4755]: I0224 10:52:40.759041 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42530: no serving certificate available for the kubelet"
Feb 24 10:52:40 crc kubenswrapper[4755]: I0224 10:52:40.771410 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42538: no serving certificate available for the kubelet"
Feb 24 10:52:40 crc kubenswrapper[4755]: I0224 10:52:40.783842 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42552: no serving certificate available for the kubelet"
Feb 24 10:52:40 crc kubenswrapper[4755]: I0224 10:52:40.796569 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42556: no serving certificate available for the kubelet"
Feb 24 10:52:40 crc kubenswrapper[4755]: I0224 10:52:40.808284 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42570: no serving certificate available for the kubelet"
Feb 24 10:52:40 crc kubenswrapper[4755]: I0224 10:52:40.950735 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42582: no serving certificate available for the kubelet"
Feb 24 10:52:40 crc kubenswrapper[4755]: I0224 10:52:40.967128 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42590: no serving certificate available for the kubelet"
Feb 24 10:52:40 crc kubenswrapper[4755]: I0224 10:52:40.989624 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42602: no serving certificate available for the kubelet"
Feb 24 10:52:41 crc kubenswrapper[4755]: I0224 10:52:41.003013 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42614: no serving certificate available for the kubelet"
Feb 24 10:52:41 crc kubenswrapper[4755]: I0224 10:52:41.016254 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42622: no serving certificate available for the kubelet"
Feb 24 10:52:41 crc kubenswrapper[4755]: I0224 10:52:41.027292 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42628: no serving certificate available for the kubelet"
Feb 24 10:52:41 crc kubenswrapper[4755]: I0224 10:52:41.042169 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42632: no serving certificate available for the kubelet"
Feb 24 10:52:41 crc kubenswrapper[4755]: I0224 10:52:41.051852 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42644: no serving certificate available for the kubelet"
Feb 24 10:52:42 crc kubenswrapper[4755]: I0224 10:52:42.316879 4755 scope.go:117] "RemoveContainer" containerID="143341025cc371f0627ece60e54135ae4e4726a34d9c2a806765f32736f60a34"
Feb 24 10:52:42 crc kubenswrapper[4755]: E0224 10:52:42.317629 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(fe13802e-a28d-4e11-a315-c0ae66bf0e1c)\"" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c"
Feb 24 10:52:42 crc kubenswrapper[4755]: I0224 10:52:42.572478 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42654: no serving certificate available for the kubelet"
Feb 24 10:52:43 crc kubenswrapper[4755]: I0224 10:52:43.495275 4755 ???:1] "http: TLS handshake error from 192.168.126.11:42664: no serving certificate available for the kubelet"
Feb 24 10:52:45 crc kubenswrapper[4755]: I0224 10:52:45.615000 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59142: no serving certificate available for the kubelet"
Feb 24 10:52:46 crc kubenswrapper[4755]: I0224 10:52:46.530874 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59156: no serving certificate available for the kubelet"
Feb 24 10:52:47 crc kubenswrapper[4755]: I0224 10:52:47.014600 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rcvqm/must-gather-5ssns"]
Feb 24 10:52:47 crc kubenswrapper[4755]: I0224 10:52:47.015427 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-rcvqm/must-gather-5ssns" podUID="0c8dd3be-6e22-4240-a626-dc83b7a646d7" containerName="copy" containerID="cri-o://b202ad9d05d75504bf6cc1655b12b19f4ccbf5b27bc3674fdc3c2f7916a993ba" gracePeriod=2
Feb 24 10:52:47 crc kubenswrapper[4755]: I0224 10:52:47.028942 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rcvqm/must-gather-5ssns"]
Feb 24 10:52:47 crc kubenswrapper[4755]: I0224 10:52:47.469805 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rcvqm_must-gather-5ssns_0c8dd3be-6e22-4240-a626-dc83b7a646d7/copy/0.log"
Feb 24 10:52:47 crc kubenswrapper[4755]: I0224 10:52:47.481218 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rcvqm/must-gather-5ssns"
Feb 24 10:52:47 crc kubenswrapper[4755]: I0224 10:52:47.485669 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rcvqm_must-gather-5ssns_0c8dd3be-6e22-4240-a626-dc83b7a646d7/copy/0.log"
Feb 24 10:52:47 crc kubenswrapper[4755]: I0224 10:52:47.485916 4755 generic.go:334] "Generic (PLEG): container finished" podID="0c8dd3be-6e22-4240-a626-dc83b7a646d7" containerID="b202ad9d05d75504bf6cc1655b12b19f4ccbf5b27bc3674fdc3c2f7916a993ba" exitCode=143
Feb 24 10:52:47 crc kubenswrapper[4755]: I0224 10:52:47.485955 4755 scope.go:117] "RemoveContainer" containerID="b202ad9d05d75504bf6cc1655b12b19f4ccbf5b27bc3674fdc3c2f7916a993ba"
Feb 24 10:52:47 crc kubenswrapper[4755]: I0224 10:52:47.486099 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rcvqm/must-gather-5ssns"
Feb 24 10:52:47 crc kubenswrapper[4755]: I0224 10:52:47.532521 4755 scope.go:117] "RemoveContainer" containerID="ac8ac232dbb29a435e990e1d3e301b5751c27f11041e8984ee6057f0cffdaede"
Feb 24 10:52:47 crc kubenswrapper[4755]: I0224 10:52:47.615223 4755 scope.go:117] "RemoveContainer" containerID="b202ad9d05d75504bf6cc1655b12b19f4ccbf5b27bc3674fdc3c2f7916a993ba"
Feb 24 10:52:47 crc kubenswrapper[4755]: E0224 10:52:47.615996 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b202ad9d05d75504bf6cc1655b12b19f4ccbf5b27bc3674fdc3c2f7916a993ba\": container with ID starting with b202ad9d05d75504bf6cc1655b12b19f4ccbf5b27bc3674fdc3c2f7916a993ba not found: ID does not exist" containerID="b202ad9d05d75504bf6cc1655b12b19f4ccbf5b27bc3674fdc3c2f7916a993ba"
Feb 24 10:52:47 crc kubenswrapper[4755]: I0224 10:52:47.616054 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b202ad9d05d75504bf6cc1655b12b19f4ccbf5b27bc3674fdc3c2f7916a993ba"} err="failed to get container status \"b202ad9d05d75504bf6cc1655b12b19f4ccbf5b27bc3674fdc3c2f7916a993ba\": rpc error: code = NotFound desc = could not find container \"b202ad9d05d75504bf6cc1655b12b19f4ccbf5b27bc3674fdc3c2f7916a993ba\": container with ID starting with b202ad9d05d75504bf6cc1655b12b19f4ccbf5b27bc3674fdc3c2f7916a993ba not found: ID does not exist"
Feb 24 10:52:47 crc kubenswrapper[4755]: I0224 10:52:47.616109 4755 scope.go:117] "RemoveContainer" containerID="ac8ac232dbb29a435e990e1d3e301b5751c27f11041e8984ee6057f0cffdaede"
Feb 24 10:52:47 crc kubenswrapper[4755]: E0224 10:52:47.617254 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac8ac232dbb29a435e990e1d3e301b5751c27f11041e8984ee6057f0cffdaede\": container with ID starting with ac8ac232dbb29a435e990e1d3e301b5751c27f11041e8984ee6057f0cffdaede not found: ID does not exist" containerID="ac8ac232dbb29a435e990e1d3e301b5751c27f11041e8984ee6057f0cffdaede"
Feb 24 10:52:47 crc kubenswrapper[4755]: I0224 10:52:47.617283 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac8ac232dbb29a435e990e1d3e301b5751c27f11041e8984ee6057f0cffdaede"} err="failed to get container status \"ac8ac232dbb29a435e990e1d3e301b5751c27f11041e8984ee6057f0cffdaede\": rpc error: code = NotFound desc = could not find container \"ac8ac232dbb29a435e990e1d3e301b5751c27f11041e8984ee6057f0cffdaede\": container with ID starting with ac8ac232dbb29a435e990e1d3e301b5751c27f11041e8984ee6057f0cffdaede not found: ID does not exist"
Feb 24 10:52:47 crc kubenswrapper[4755]: I0224 10:52:47.619245 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0c8dd3be-6e22-4240-a626-dc83b7a646d7-must-gather-output\") pod \"0c8dd3be-6e22-4240-a626-dc83b7a646d7\" (UID: \"0c8dd3be-6e22-4240-a626-dc83b7a646d7\") "
Feb 24 10:52:47 crc kubenswrapper[4755]: I0224 10:52:47.619293 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcw4l\" (UniqueName: \"kubernetes.io/projected/0c8dd3be-6e22-4240-a626-dc83b7a646d7-kube-api-access-fcw4l\") pod \"0c8dd3be-6e22-4240-a626-dc83b7a646d7\" (UID: \"0c8dd3be-6e22-4240-a626-dc83b7a646d7\") "
Feb 24 10:52:47 crc kubenswrapper[4755]: I0224 10:52:47.624686 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c8dd3be-6e22-4240-a626-dc83b7a646d7-kube-api-access-fcw4l" (OuterVolumeSpecName: "kube-api-access-fcw4l") pod "0c8dd3be-6e22-4240-a626-dc83b7a646d7" (UID: "0c8dd3be-6e22-4240-a626-dc83b7a646d7"). InnerVolumeSpecName "kube-api-access-fcw4l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 10:52:47 crc kubenswrapper[4755]: I0224 10:52:47.714813 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c8dd3be-6e22-4240-a626-dc83b7a646d7-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0c8dd3be-6e22-4240-a626-dc83b7a646d7" (UID: "0c8dd3be-6e22-4240-a626-dc83b7a646d7"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 10:52:47 crc kubenswrapper[4755]: I0224 10:52:47.721297 4755 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0c8dd3be-6e22-4240-a626-dc83b7a646d7-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 24 10:52:47 crc kubenswrapper[4755]: I0224 10:52:47.721326 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcw4l\" (UniqueName: \"kubernetes.io/projected/0c8dd3be-6e22-4240-a626-dc83b7a646d7-kube-api-access-fcw4l\") on node \"crc\" DevicePath \"\""
Feb 24 10:52:48 crc kubenswrapper[4755]: I0224 10:52:48.328232 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c8dd3be-6e22-4240-a626-dc83b7a646d7" path="/var/lib/kubelet/pods/0c8dd3be-6e22-4240-a626-dc83b7a646d7/volumes"
Feb 24 10:52:48 crc kubenswrapper[4755]: I0224 10:52:48.661318 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59160: no serving certificate available for the kubelet"
Feb 24 10:52:49 crc kubenswrapper[4755]: I0224 10:52:49.577899 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59164: no serving certificate available for the kubelet"
Feb 24 10:52:50 crc kubenswrapper[4755]: I0224 10:52:50.316695 4755 scope.go:117] "RemoveContainer" containerID="fb15228cab6b3b0ebbb161d141dbaf89fb26137ce9638882c45dd4e7edffc6ad"
Feb 24 10:52:50 crc kubenswrapper[4755]: E0224 10:52:50.317608 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(2f320527-691f-48e9-a243-f60bc805da39)\"" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39"
Feb 24 10:52:51 crc kubenswrapper[4755]: I0224 10:52:51.695291 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59168: no serving certificate available for the kubelet"
Feb 24 10:52:52 crc kubenswrapper[4755]: I0224 10:52:52.620836 4755 ???:1] "http: TLS handshake error from 192.168.126.11:59172: no serving certificate available for the kubelet"
Feb 24 10:52:54 crc kubenswrapper[4755]: I0224 10:52:54.748047 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46966: no serving certificate available for the kubelet"
Feb 24 10:52:55 crc kubenswrapper[4755]: I0224 10:52:55.668355 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46968: no serving certificate available for the kubelet"
Feb 24 10:52:57 crc kubenswrapper[4755]: I0224 10:52:57.316920 4755 scope.go:117] "RemoveContainer" containerID="143341025cc371f0627ece60e54135ae4e4726a34d9c2a806765f32736f60a34"
Feb 24 10:52:57 crc kubenswrapper[4755]: E0224 10:52:57.317845 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-galera-0_openstack(fe13802e-a28d-4e11-a315-c0ae66bf0e1c)\"" pod="openstack/openstack-galera-0" podUID="fe13802e-a28d-4e11-a315-c0ae66bf0e1c"
Feb 24 10:52:57 crc kubenswrapper[4755]: I0224 10:52:57.808839 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46984: no serving certificate available for the kubelet"
Feb 24 10:52:58 crc kubenswrapper[4755]: I0224 10:52:58.788833 4755 ???:1] "http: TLS handshake error from 192.168.126.11:46998: no serving certificate available for the kubelet"
Feb 24 10:53:00 crc kubenswrapper[4755]: I0224 10:53:00.874568 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47006: no serving certificate available for the kubelet"
Feb 24 10:53:01 crc kubenswrapper[4755]: I0224 10:53:01.843496 4755 ???:1] "http: TLS handshake error from 192.168.126.11:47022: no serving certificate available for the kubelet"
Feb 24 10:53:02 crc kubenswrapper[4755]: I0224 10:53:02.317170 4755 scope.go:117] "RemoveContainer" containerID="fb15228cab6b3b0ebbb161d141dbaf89fb26137ce9638882c45dd4e7edffc6ad"
Feb 24 10:53:02 crc kubenswrapper[4755]: E0224 10:53:02.317699 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(2f320527-691f-48e9-a243-f60bc805da39)\"" pod="openstack/openstack-cell1-galera-0" podUID="2f320527-691f-48e9-a243-f60bc805da39"
Feb 24 10:53:03 crc kubenswrapper[4755]: I0224 10:53:03.925716 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41854: no serving certificate available for the kubelet"
Feb 24 10:53:04 crc kubenswrapper[4755]: I0224 10:53:04.898299 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41866: no serving certificate available for the kubelet"
Feb 24 10:53:06 crc kubenswrapper[4755]: I0224 10:53:06.979306 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41874: no serving certificate available for the kubelet"
Feb 24 10:53:07 crc kubenswrapper[4755]: I0224 10:53:07.938720 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41886: no serving certificate available for the kubelet"
Feb 24 10:53:10 crc kubenswrapper[4755]: I0224 10:53:10.043143 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41900: no serving certificate available for the kubelet"
Feb 24 10:53:10 crc kubenswrapper[4755]: I0224 10:53:10.985252 4755 ???:1] "http: TLS handshake error from 192.168.126.11:41916: no serving certificate available for the kubelet"